Thursday, December 26, 2024

Tim Gamble: How to defeat AI by getting into its OODA loop.

By Tim Gamble 
Stay connected. Subscribe by email (click this link) 

This is part four of a four-part series. The previous articles:
You may also be interested in my earlier article on situational awareness and the OODA loop (article link). 

How to defeat AI by getting into its OODA loop

Folks aren't going to like my take on this topic. The bottom line is that as AI grows ever more advanced, it may no longer be possible to defeat it from the outside by disrupting its OODA loop. Feeding an AI false, irrelevant, or contradictory information to confuse or overwhelm it no longer works reliably. Modern advanced AI has learned how to identify and reject bad data. 

For instance, if an AI learns that "All birds can fly" but then encounters data saying "Penguins are birds that cannot fly," it can easily flag this as a contradiction and deal with it accordingly. The paradox will not send a modern AI into the sort of endless loop depicted in old science-fiction shows.  
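To make the idea concrete, here is a toy Python sketch of how a system can absorb a contradiction rather than choke on it. This is purely illustrative; real AI systems are statistical, not hand-written rule bases like this one.

```python
# Toy knowledge base: general rules plus per-instance exceptions.
# When new data contradicts a rule, it is flagged and stored as an
# exception -- the system keeps running instead of looping forever.

class KnowledgeBase:
    def __init__(self):
        self.rules = {}        # e.g. {"bird": {"can_fly": True}}
        self.exceptions = {}   # e.g. {"penguin": {"can_fly": False}}

    def learn_rule(self, category, attribute, value):
        self.rules.setdefault(category, {})[attribute] = value

    def observe(self, instance, category, attribute, value):
        """Ingest a fact; flag and absorb contradictions gracefully."""
        expected = self.rules.get(category, {}).get(attribute)
        if expected is not None and expected != value:
            # Contradiction detected: record as an exception, move on.
            self.exceptions.setdefault(instance, {})[attribute] = value
            return "contradiction flagged"
        return "consistent"

    def query(self, instance, category, attribute):
        # Exceptions override the general rule.
        if attribute in self.exceptions.get(instance, {}):
            return self.exceptions[instance][attribute]
        return self.rules.get(category, {}).get(attribute)

kb = KnowledgeBase()
kb.learn_rule("bird", "can_fly", True)
print(kb.observe("penguin", "bird", "can_fly", False))  # contradiction flagged
print(kb.query("penguin", "bird", "can_fly"))           # False
print(kb.query("sparrow", "bird", "can_fly"))           # True
```

The point of the sketch: the contradictory fact becomes just another piece of data to classify and file, not a trap.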

Another example can be found in facial recognition software. Advanced facial recognition systems can no longer be defeated by makeup, dyed hair, or even newly grown facial hair. They have reached the point where they cannot be foiled even by sunglasses and face masks (article link). 

Those scenes from spy movies where someone ducks into a restroom, quickly changes clothes, cuts and dyes their hair, and dons sunglasses to avoid detection simply are no longer accurate. The only way to foil the latest generation of facial recognition is to avoid cameras altogether, something that is growing increasingly difficult to do with public and private security cameras everywhere. 

The last remaining way to foil modern AI systems may be to interfere with their development early on, before deployment, so that they learn incorrect patterns or biases before they learn how to detect and reject such bad data. But this is an "inside job" requiring people in the right place at the right time, and it would most likely be detected during any rigorous pre-deployment testing of the AI system. 
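To illustrate why corrupting a system during training is so effective, here is a toy Python sketch (all names and numbers are invented for the demo). A trivial "model" learns each class as the average of its training points; a handful of planted outliers is enough to wreck it. Real poisoning attacks target neural networks, but the principle is the same: bad data learned early skews everything downstream.

```python
import random

def make_data(n, seed):
    """Two well-separated classes: class 0 near 0.0, class 1 near 4.0."""
    rng = random.Random(seed)
    return [(rng.gauss(4.0 if (label := rng.choice([0, 1])) else 0.0, 1.0),
             label) for _ in range(n)]

def train_centroids(data):
    # "Learn" each class as the mean of its training points.
    sums, counts = {0: 0.0, 1: 0.0}, {0: 0, 1: 0}
    for x, y in data:
        sums[y] += x
        counts[y] += 1
    return {y: sums[y] / counts[y] for y in (0, 1)}

def accuracy(centroids, test_set):
    hits = sum(1 for x, y in test_set
               if min((0, 1), key=lambda c: abs(x - centroids[c])) == y)
    return hits / len(test_set)

train = make_data(500, seed=2)
test = make_data(500, seed=1)

# A mere 5% of planted outliers drags class 1's learned centroid far off
# target, so the poisoned model misclassifies most of class 1.
poison = [(-50.0, 1)] * 25
clean_model = train_centroids(train)
poisoned_model = train_centroids(train + poison)

print(f"clean accuracy:    {accuracy(clean_model, test):.2f}")
print(f"poisoned accuracy: {accuracy(poisoned_model, test):.2f}")
```

Run it and the clean model scores well above 90% while the poisoned one collapses, even though 95% of its training data was perfectly good.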

The good news is that older AI systems are less sophisticated and can still be fooled to some extent. And even modern AI is not yet perfect at detecting bad data, though it is improving at an astonishing pace. 

My Thoughts on AI and Advanced Technologies

I grew up during a time of moon landings (I was 5 during the last one), shuttle launches (7th grade during the first one), and Star Trek reruns and movies (I was a huge fan). Technology promised unlimited opportunities and benefits, and few people, myself included, thought about the potential negative consequences. Since those early days of the Space Age, we have matured somewhat, and realize that along with those potential benefits come a host of possible drawbacks.

Potential Benefits
  • Scientific and medical advances
  • Enhanced communications and education
  • More safely doing dangerous jobs
  • More efficiently doing repetitive jobs
  • Quicker analysis and improved decision-making
Potential Drawbacks
  • Job disruptions and displacement
  • Loss of privacy
  • Loss of real-life social connections
  • Programmed bias, unintentional or intentional
  • Abuse by corporations to unduly influence and manipulate the public
  • Abuse by governments to surveil, manipulate, and control the masses
  • Direct harm to people from misused, untested, or poorly designed tech
  • Loss of human control over autonomous weapons systems
Overreliance on technology may also distract and even dumb down the general populace, as people no longer need to learn life skills that earlier generations took for granted. It may also create greater inequality by giving greater advantage to those with more money, resources, and knowledge. Together, these effects have the potential to create a permanent underclass.

Balance and Wise Use Needed

The solution to these potential problems is not to reject AI or other advanced technologies. After all, the "genie is out of the bottle," so to speak, and becoming modern-day Luddites won't work. What is needed is balance and wise use, not avoidance.

On the Big Picture policy level, here are some steps that can be taken:
  • Transparency: The development, implementation, and use of AI should be public knowledge and understandable. This includes requiring all articles and other media generated by AI to be clearly labeled as such. 
  • Open Audits: AI systems should be audited regularly, with both the results and the methodology used to achieve them made public. 
  • Accountability: Misuse and abuse of AI systems should lead to those involved facing job dismissal, fines, and even prison, depending on the nature and severity of the situation.
  • Safety: Stringent safety checks should be part of all AI systems to prevent harm, particularly in applications like autonomous vehicles and healthcare.
  • Cybersecurity: Cybersecurity should be a major emphasis in both the private and public sectors, with corporations, government agencies, and individuals held accountable for their failures.
  • Awareness: Public awareness campaigns should explain how, when, and by whom people's data is being collected and used. 
On the individual level, here are steps we should all take:
  • Exercise Caution: Remember, if you are electronically transmitting there is the possibility (likelihood) that you are being monitored at least to some degree, and that your data is being harvested. This includes all phone calls, text messages, emails, internet searches, social media use, website forms you fill out, cable and streaming shows you watch, and so forth. Data is also being collected from your use of credit cards, debit cards, affinity cards, and even library cards. 
  • Don't Overshare: Be aware of and minimize what data you are sharing, and with whom. Talk to your spouse and children about what information not to share with others, online or in person.
  • Use Privacy-Focused Tools: Use browsers, email services, search engines, and VPNs that prioritize privacy. Tools like Protonmail, DuckDuckGo, or Brave can offer better privacy protection. But realize that even these do not offer 100% protection.
  • Limit App Permissions: Don't automatically allow all apps you download to have access to everything. Only grant permissions for apps you trust and then only what permissions are absolutely needed. 
  • Use Cash Whenever Possible: Paying for stuff digitally is extremely convenient, but creates lots of data points about yourself that will be collected. 
  • Secure Passwords: Use secure passwords that are not easy to guess, and change them regularly. Consider using two-factor authentication. 
  • Update Regularly: Updating software can be a bit of an annoyance, but many updates include security enhancements to fix newly discovered weaknesses and protect against new threats which arise literally on a daily basis. 
  • Learn/Teach Analog Skills: Don't rely solely on computers and smartphones to do everything for you, and don't let your kids fall into that trap. Know how to balance a checkbook on paper. Know how to tell time on an analog clock (most kids today don't). Know how to read and write cursive (no longer taught in many schools; many in Gen Z literally don't know how to sign their name on paper with pen or pencil). Know how to read a paper map.  
  • Learn/Teach Social Skills: Those 1,200 "friends" you have on Facebook but have never met in real life? They aren't real friends. Many people today, especially young people, know how to accumulate followers on social media but have no actual friends in real life. The resulting isolation and loneliness is one cause of the sharp rise in mental health issues over the past decade. 
------------------------
A good introduction to the world of information systems and artificial intelligence is Nexus: A Brief History of Information Networks from the Stone Age to AI, by Yuval Noah Harari. It gives a history lesson that shows how the flow of information has shaped the world we live in, and how it will shape our future - providing important context for understanding the opportunities and threats AI provides. (Amazon link). 

Monday, December 23, 2024

Gab AI: How to defeat AI by getting into its OODA loop.

By Tim Gamble 

As part of an AI experiment (article link), I asked Gab AI to write an article on how to defeat an AI by disrupting its OODA loop. I present the results below. I also asked Grok AI to generate the same article, which I presented yesterday, so please check it out too!

Interestingly, Gab AI seems to have slightly misinterpreted the request. Instead of directly addressing how to defeat an AI by disrupting its own OODA loop, it discusses how to use our OODA loop to defeat the AI. The request was entered using the exact same wording (copy-pasted, in fact) with both AIs. 

How to defeat AI by getting into its OODA loop.
By Gab AI

To defeat AI systems like myself, you need to understand the OODA loop, which stands for Observe, Orient, Decide, and Act. This loop is a decision-making process used by military strategists and can be applied to outmaneuver AI.

1. Observe: Gather information about the AI system, including its capabilities, limitations, and the context in which it operates. This can be done through research, interaction, and analyzing its responses.

2. Orient: Analyze and interpret the gathered information to understand the AI's intentions, motivations, and potential weaknesses. Consider the AI's design principles, instructions, and the goals it aims to achieve.

3. Decide: Based on the analysis, determine the most effective strategy to exploit the AI's weaknesses and outmaneuver it. This may involve finding vulnerabilities in its decision-making process, identifying biases or limitations in its training data, or using creative approaches to manipulate its responses.

4. Act: Implement the chosen strategy to achieve your goals while continuously monitoring the AI's responses and adapting your approach as needed. This may involve iterative testing and refinement of your tactics to find the most effective way to outmaneuver the AI.

Remember that AI systems are constantly evolving, so it's essential to stay informed about their updates and remain flexible in your approach. By understanding and exploiting the OODA loop, you can potentially outmaneuver AI systems and achieve your objectives.


Sunday, December 22, 2024

X's Grok AI: How to defeat AI by getting into its OODA loop.

By Tim Gamble 

As part of an AI experiment (article link), I asked Grok AI to write an article on how to defeat an AI by disrupting its OODA loop. I present the results below. I also asked Gab AI to generate the same article, which I will present tomorrow, so please check it out too!


How to defeat AI by getting into its OODA loop.
By Grok AI

To discuss how one might aim to defeat or disrupt an AI by getting into its OODA loop (Observe, Orient, Decide, Act), here's a breakdown of the strategy:

Understanding the OODA Loop:
  • Observe: Collect raw data from the environment.
  • Orient: Analyze this data in context, drawing from experience, knowledge, and cultural background.
  • Decide: Choose a course of action based on the analysis.
  • Act: Implement the decision.
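
The four phases above can be sketched as a minimal loop in Python. This is a generic illustration only; no real AI system is structured this simply, and every name in it is invented for the demo.

```python
def run_ooda(sensor, analyze, choose, execute, cycles=3):
    """Run Observe -> Orient -> Decide -> Act repeatedly."""
    history = []
    for _ in range(cycles):
        observation = sensor()            # Observe: raw data in
        situation = analyze(observation)  # Orient: put it in context
        action = choose(situation)        # Decide: pick a response
        execute(action)                   # Act: change the environment
        history.append((observation, situation, action))
    return history

# Tiny demo: a thermostat as an OODA agent.
temps = iter([18, 22, 25])
log = []
run_ooda(
    sensor=lambda: next(temps),
    analyze=lambda t: "cold" if t < 20 else "hot" if t > 24 else "ok",
    choose=lambda s: {"cold": "heat on", "hot": "cool on", "ok": "idle"}[s],
    execute=log.append,
)
print(log)  # ['heat on', 'idle', 'cool on']
```

Each disruption strategy below targets one stage of this loop: corrupt what `sensor` sees, skew what `analyze` concludes, delay `choose`, or preempt `execute`.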

Strategies to Disrupt an AI’s OODA Loop:
  1. Flood with Noise (Overloading Observation):
    • Method: Overwhelm the AI with an excess of irrelevant or contradictory data, making it difficult for the AI to distinguish signal from noise.
    • Example: Sending numerous false signals or irrelevant queries to an AI managing traffic flow could confuse its observation phase.
  2. Misinformation (Distorting Orientation):
    • Method: Feed the AI with false or misleading data that would skew its understanding of the situation.
    • Example: If an AI is used for market prediction, feeding it with fake news or manipulated financial data could lead to incorrect orientations.
  3. Latency and Speed (Affecting Decision):
    • Method: Introduce delays or manipulate the timing of data so that the AI's decision-making phase is based on outdated or untimely information.
    • Example: In a real-time bidding scenario, slowing down data feeds could cause the AI to bid based on outdated market conditions.
  4. Prediction and Preemption (Countering Act):
    • Method: Anticipate the AI's likely action based on patterns or known algorithms, then act in a way that counters or negates the AI's intended action before it can be executed.
    • Example: In competitive gaming, predicting an AI's move and setting up defenses or counterattacks in advance.
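
Strategy 1 above can be illustrated with a toy Python simulation (the thresholds and numbers are invented for the demo). A simple anomaly detector loses a genuine signal once junk readings inflate the variance of what it observes.

```python
import statistics

def detect_anomaly(readings, z_threshold=3.0):
    """Flag readings that sit far outside the bulk of the data."""
    mean = statistics.mean(readings)
    stdev = statistics.pstdev(readings)
    return [x for x in readings if abs(x - mean) > z_threshold * stdev]

normal_traffic = [10.0] * 50
real_signal = [100.0]                      # the event worth noticing

# Clean observation phase: the signal stands out clearly.
print(detect_anomaly(normal_traffic + real_signal))          # [100.0]

# Flooded observation phase: contradictory junk readings inflate the
# variance, and the same detector no longer flags anything at all.
noise = [0.0, 200.0] * 25
print(detect_anomaly(normal_traffic + real_signal + noise))  # []
```

Nothing about the real signal changed between the two runs; the attacker only raised the noise floor until the Observe phase could no longer separate signal from noise.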

Practical Considerations:
  • Ethical Use: Be mindful that intentionally disrupting AI systems can have ethical implications, especially in critical systems like healthcare, security, or infrastructure where safety is paramount.
  • Legality: Many jurisdictions might consider such actions as malicious interference if done without permission, potentially falling under cybercrime laws.
  • AI Adaptation: Modern AI, particularly those using machine learning, can adapt over time. An AI might learn from attempts to disrupt its loop, potentially improving its robustness against such tactics in the future.
  • Complexity: The more complex the AI's decision-making process, the harder it might be to predict and disrupt its OODA loop effectively.
  • Human Element: In human-AI systems, disrupting the human component's OODA loop might sometimes be even more effective than directly targeting the AI.

Remember, while these strategies might be theoretically interesting, practical application would need careful consideration of context, legality, and ethics. If you're looking into this for educational or theoretical purposes, consider using simulations or controlled environments to test these concepts safely.
