3 Spooky AI Myths... Debunked!

All this AI talk is overwhelming.
I've been working in the field for over a decade and I've never seen anything like this. Almost every day something new comes out. One day it's a new video generation tool; the next, a more capable speech recognition API. Sometimes it's even an all-knowing large language model that not only answers questions about any topic, but can also plan and solve problems... like humans do.
This rapid innovation is exciting, but it's also a bit spooky. It's hard to keep up and stay informed, and some of the implications can feel unsettling. Here are three "spooky" myths about AI and a look at the real impact behind them.

1. AI Will Take All Our Jobs and Make Humans Irrelevant
AI is advancing rapidly and promises to automate tasks that have traditionally required human intelligence, especially tasks in white-collar fields. Indeed, many are worried that this will lead to job displacement.
AI can indeed automate many things, especially repetitive and tedious paperwork. In a podcast episode with Kit Cox from Enate, we discussed how RPA is a tool that automates tasks, not processes. But AI goes beyond this: it can not only automate tasks, but also do some planning, or even act with some agency. Agents can plan, make decisions, and go in unexpected and even "creative" directions.
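To make that distinction concrete, here is a minimal, purely illustrative Python sketch: a fixed RPA-style script next to a simple agent loop that decides its next action at each step. The `call_llm` helper is a hypothetical stand-in for any language model API, not a specific product.

```python
# Illustrative sketch only: a fixed RPA-style script versus a simple agent loop.
# `call_llm` is a hypothetical placeholder, not a real API.

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; a real system would call an actual model API."""
    return "done"  # canned answer so the sketch runs on its own

def rpa_style_task(invoice: dict) -> None:
    # A fixed sequence of steps: the kind of task automation RPA is good at.
    amount = invoice["amount"]
    print(f"Copy {amount} into the accounting system")

def agent_loop(goal: str, max_steps: int = 5) -> None:
    # An agent plans, acts, observes the result, and re-plans until done.
    history: list[str] = []
    for _ in range(max_steps):
        decision = call_llm(f"Goal: {goal}\nSo far: {history}\nNext action?")
        history.append(decision)
        if decision == "done":
            break
    print(f"Agent finished after {len(history)} step(s)")

rpa_style_task({"amount": 120.50})
agent_loop("resolve the customer's billing complaint")
```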
Yes, AI will displace some jobs, even make some of them irrelevant. But the real potential in AI is how it can help humans be more productive and more creative.

A customer of ours receives 6 million letters annually. Employees used to spend thousands of hours on tedious, repetitive tasks related to these letters. I personally interviewed an employee who talked about handling 100 cases in one hour. Can you imagine the boredom? Today, AI helps them extract key information from documents and trigger automated responses. This frees employees from data entry and lets them engage in meaningful conversations and provide personalized support, turning them into customer experience specialists rather than "paper pushers".
Similarly, at a customer in the insurance sector, claims managers traditionally spent countless hours going through paperwork. A sea of contracts, past claims and loads of documents. Now, AI-powered tools can analyze complex claims, answer questions about past cases, and speed up the review process. This empowers claims managers to make faster, more informed decisions, transforming them into 'claims strategists' who can focus on complex cases and customer relationships.
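To give a feel for what "extracting key information and triggering automated responses" can look like under the hood, here is a minimal Python sketch. The letter text, field names, and the `call_llm` helper are illustrative assumptions, not our customers' actual systems.

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical document-understanding model call; returns a canned
    answer here so the sketch runs on its own."""
    return '{"sender": "J. Smith", "topic": "address change", "case_id": "12345"}'

def extract_fields(letter_text: str) -> dict:
    # Ask the model for the fields a case worker would otherwise retype by hand.
    prompt = "Extract sender, topic and case_id from this letter as JSON:\n" + letter_text
    return json.loads(call_llm(prompt))

def route_case(fields: dict) -> str:
    # Trigger an automated response based on what was extracted.
    if fields["topic"] == "address change":
        return f"Auto-update address for case {fields['case_id']}"
    return "Send to a human specialist"

fields = extract_fields("Dear team, please update my address ... J. Smith")
print(route_case(fields))
```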
AI can free humans from tedious tasks, giving them the opportunity to do more creative and impactful work, or to move into more interesting jobs. Modern economies value clerical and white-collar jobs more than jobs in education or healthcare. Perhaps this is an opportunity for economies to better reward that kind of work.

2. AI Will Be the End of Democracy
Many (myself included) are worried that AI could be the end of democracy by undermining our trust in the truth. Propaganda is not a new tool in politics, but AI makes propaganda at large scale more accessible to extremist groups, populist politicians, or anyone looking to destabilize a particular country or political system.
A large portion of the content created today on social media and all over the internet is generated by AI. Blog posts, social media posts, images, videos, and even podcasts can be produced using generative AI.
Deepfakes are becoming so realistic that today, more than ever, it is hard to distinguish truth from “fake news”. Fact-checking has always been important, but it has never been as important as it is now.

One of my favorite authors - Yuval Noah Harari - argues that “AI's growing ability to simulate human intimacy could undermine democracy”, as bots posing as people manipulate emotions and opinions, suggesting a need to ban bots that impersonate humans to safeguard democratic discourse.
Fortunately, this is a problem that has been clearly identified. And, we’re seeing efforts to address this.
So, how do I know if a piece of content is real? What does "real" mean anyway? It's not an easy task, but I'm hopeful.
Social media platforms and AI companies are working on ways to certify human-generated content and create markers that indicate authenticity. For example, watermarking and content verification technologies are becoming standard tools, designed to help users differentiate between real and AI-generated media.
Here are some examples of these efforts around content certification, watermarking, and verification:
- Content Authenticity Initiative (CAI): a “cross-industry community of over 3,000 members including civil society, media, and technology companies, founded by Adobe in 2019”. The initiative aims to create a standardized system for attributing and verifying content, using cryptographic techniques to embed provenance information into digital media.
- Google's SynthID: This tool watermarks AI-generated images with an invisible digital signature, making it possible to detect the watermark even after the image is edited or transformed. This helps identify content generated by Google's AI services (a toy sketch further below shows how a statistical watermark can work in principle).
- Microsoft's Azure Content Moderator: This service uses AI to detect and flag potentially harmful or inappropriate content, including deepfakes and other manipulated media.
- Truepic: This platform provides tools for verifying the authenticity of photos and videos, using a combination of cryptographic techniques and human review.
- NewsGuard: This browser extension rates the credibility of news sources, helping users identify reliable information.
These are just a few examples of the many efforts to address the challenges of misinformation and AI-generated content. As these technologies continue to develop, they have the potential to play a significant role in preserving trust and ensuring the integrity of information online.
These tools won’t solve everything, but they’re a promising step toward ensuring AI doesn’t destroy our democratic institutions.
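As a small illustration of how an invisible watermark in generated text can work in principle, here is a toy Python sketch of the "green list" statistical watermark described in the research literature: the previous token seeds a hash that picks a preferred half of the vocabulary, and a detector simply counts how often that preference shows up. This is a teaching sketch only, not how SynthID or any specific product is implemented.

```python
import hashlib
import random

VOCAB = ["the", "cat", "sat", "on", "a", "mat", "dog", "ran", "fast", "home"]

def green_list(prev_token: str) -> set:
    # Hash the previous token to deterministically pick a "green" half of the vocabulary.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    return set(random.Random(seed).sample(VOCAB, k=len(VOCAB) // 2))

def generate_watermarked(n_tokens: int, start: str = "the") -> list:
    # A stand-in generator that always picks from the green list: that bias IS the watermark.
    tokens = [start]
    for _ in range(n_tokens):
        tokens.append(random.choice(sorted(green_list(tokens[-1]))))
    return tokens

def green_fraction(tokens: list) -> float:
    # Detection: how often does a token fall in its predecessor's green list?
    hits = sum(t in green_list(p) for p, t in zip(tokens, tokens[1:]))
    return hits / (len(tokens) - 1)

watermarked = generate_watermarked(50)
ordinary = [random.choice(VOCAB) for _ in range(50)]
print("watermarked text:", round(green_fraction(watermarked), 2))  # close to 1.0
print("ordinary text:   ", round(green_fraction(ordinary), 2))     # close to 0.5
```

Ordinary text lands in the "green" half about half the time by chance, so heavily biased text stands out statistically even though the bias is invisible to a human reader.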
One thing that makes me hopeful is that human-generated content might be more valued in the future.

3. AI Will Kill All Humans
This fear is perhaps the most dramatic. Yet, it has its roots in real concerns. Scholars like Roman Yampolskiy and researchers at the Center for AI Safety warn of potential risks that AI poses to humankind in general. They envision a future where advanced AI could become uncontrollable and be an existential threat to humanity.
But let's balance this with current reality.
Yann LeCun, a leading AI researcher, explains that today's large language models (LLMs) are impressive but nowhere near human intelligence. The core mechanism behind LLMs is predicting the next word in a sequence, which gives the illusion of “thinking,” but it is not true intelligence. LeCun also argues that an AI needs knowledge of the physical world in order to be truly intelligent. So a "real" artificial superintelligence would have to understand the "real" world, far beyond understanding language. And according to LeCun, we are many decades away from such breakthroughs.
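To see why "predicting the next word" can look like thinking without being it, here is a toy Python sketch: a tiny bigram model that only counts which word tends to follow which, then generates text with the same loop a real LLM uses (predict, append, repeat). Real models use neural networks over vastly more context, but the generation loop has the same shape.

```python
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count, for each word, which words tend to follow it: a tiny "language model".
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start: str, n_words: int) -> str:
    # The same loop a real LLM runs: predict the next token, append it, repeat.
    words = [start]
    for _ in range(n_words):
        options = follows.get(words[-1])
        if not options:
            break  # nothing ever followed this word in the corpus
        nxt = random.choices(list(options), weights=list(options.values()))[0]
        words.append(nxt)
    return " ".join(words)

print(generate("the", 10))  # fluent-looking output with no understanding behind it
```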
Besides this, there is lots of work happening to make sure that AI is safe. Organizations like OpenAI and DeepMind are focusing on developing alignment techniques, which ensure that AI systems' actions align with human values and goals.
On the other hand, the open-source community is thriving and sharing the technology with the world. This kind of transparency will be key for a future with safe AI: if the technology is open, it will be easier to identify threats early.
Also, the community working on AI safety is growing, and the future of AI depends on responsible development and strict ethical guidelines. There are risks, of course, but I believe that enough AI safety practices will be in place well before an AI is able to go rogue.
Summary
AI can free humans from tedious and repetitive tasks, enabling them to do more impactful and creative things.
While AI does present some unsettling possibilities, there's also a lot of work going into understanding and mitigating these risks. The future of AI could be incredibly beneficial if approached with caution, oversight, and a commitment to ethical standards.
I personally believe that AI is the technology that will make humankind sustainable, and help us bring balance to the ecosystem, provided that we can develop and use it safely.