Reflections on Yuval Noah Harari’s London Talk: Storytelling, Technology, and AI Ethics

A few weeks ago, I had a ticket to see Yuval Noah Harari, one of my favourite authors, live on stage in London. Unfortunately, I couldn’t make it on the day because I was unwell. Thankfully, the ticket included a copy of Nexus: A Brief History of Information Networks from the Stone Age to AI, the book Harari was promoting, and a few days later I received a link to the event recording. Watching the recording wasn’t quite the same as being there, but it allowed me to experience Harari’s fascinating conversation on storytelling, AI, democracy, and the need for adaptability in a fast-changing world.

I’ve admired Harari’s work for a long time, ever since I read Sapiens: A Brief History of Humankind. That book changed my view on life and priorities. Before reading it, I had a work-centric mindset that, in hindsight, was probably unhealthy. Although I love design and supporting others through my work, Sapiens helped me step back and see the bigger picture, reinforcing the importance of family and a balanced life alongside career ambitions.

Storytelling’s Power and Influence

Harari opened with a story about his childhood, describing how an early fascination with Greek mythology grew into a broader interest in history. He noted how storytelling has always shaped human behaviour, and pointed out that scientists often struggle with storytelling, which makes it harder for them to convey crucial insights to a wider audience. As he explained, people tend to follow storytellers, and stories often reinforce beliefs rather than challenge them.

One example Harari used was the very different impact of two 16th-century books: one on witch hunting and another by Copernicus. The witch-hunting book, written in an engaging style, contributed to the mass hysteria of witch hunts, while Copernicus, though scientifically groundbreaking, struggled to reach the public due to his dry, factual style. This comparison highlighted that it’s often not facts alone that drive societal change – it’s stories.

Information, Truth, and the Influence of Algorithms

In today’s digital age, Harari explained, information doesn’t just come from traditional sources; it’s also driven by algorithms. He drew an interesting comparison between the Bible and modern recommendation systems (like Netflix), suggesting that algorithms curate content for us in much the same way early religious leaders chose what stories to include in the Bible. This shift raises the stakes because, unlike past technologies, AI can make decisions on its own, picking and amplifying narratives without human input.

While science constantly evolves and corrects itself, Harari pointed out that religious texts don’t have the same mechanisms for correction. This means certain values and beliefs become fixed over time, often persisting despite new insights. Science, he explained, progresses by questioning and correcting established knowledge – a system that is largely absent in other information sources.

The Relationship Between AI and Democracy

Harari also explored the complex relationship between AI and democracy. Democracy, he argued, has always depended on the technology of its time, from the printing press to television to the internet. But AI is different; it can act autonomously, and this changes the democratic equation. Previously, people controlled what stories got “out there”, but now, algorithms make those choices, which brings up essential questions about transparency and accountability.

He suggested that algorithms should disclose when they’re interacting with people, instead of pretending to be human. According to Harari, organisations should take responsibility for the actions of their algorithms, not leave it to users. Since AI is advancing rapidly, he emphasised the need for real-time governance, ethics, and explainability to keep pace with these changes.

The Misunderstanding of Consciousness and Intelligence in AI

Harari touched on how people often confuse intelligence with consciousness. He noted that we selectively attribute consciousness to animals we care about – like cats and dogs – but not to those we consume as food; this selective empathy is based more on emotional attachment than on logic. Similarly, people may project consciousness onto AI systems, even though we lack an objective way to measure it. That projection says more about human psychology than about AI’s capabilities.

Final Reflections

Reflecting on Harari’s points about storytelling, I’m reminded of a recent conversation with a colleague about its role in design. She shared an example from her team, where a designer “translates” the complex work of engineers to make it accessible to other stakeholders. The engineers have deep expertise, but their explanations can sometimes be too technical for non-specialists. The designer steps in as a translator, simplifying the technical language and using visuals to communicate key ideas, so everyone in the organisation can better engage with the work.

This kind of visual storytelling plays an essential role in design. While storytelling through words has long been central to our culture, visual storytelling adds another layer. A well-crafted visual can convey complex ideas more effectively than words alone—hence the saying, “A picture is worth a thousand words.” While visuals can sometimes be misinterpreted, combining them with clear text or speech creates a more comprehensive and engaging experience for a wider audience.

In design, storytelling bridges gaps between technical experts and non-technical audiences, making complex information more relatable and accessible. It reminds us that effective communication often requires a blend of methods to ensure clarity, engagement, and alignment with our ethical responsibilities as creators.

Harari ended the talk with a call for openness, curiosity, and adaptability. In an era of rapid technological change, he advised staying flexible and open to rethinking our views. He stressed the importance of grounding ourselves in our core values as we navigate the future of AI.

Watching Harari’s talk reinforced some important ideas for me, especially about the role of storytelling in shaping our understanding of complex issues, from history to modern technologies. Although I regret not being there in person, the recording allowed me to experience his insights and highlighted the importance of ethical considerations as we move forward in technology.

2 responses to “Reflections on Yuval Noah Harari’s London Talk: Storytelling, Technology, and AI Ethics”

  1. Grant Castillou

    It’s becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean, can any particular theory be used to create a human-adult-level conscious machine? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came to humans alone with the acquisition of language. A machine with only primary consciousness will probably have to come first.

    What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.

    I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

    My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461, and here is a video of Jeff Krichmar talking about some of the Darwin automata, https://www.youtube.com/watch?v=J7Uh9phc1Ow

    1. Samantha Osys

      Thank you, Grant, for leaving a comment. I will take a look at both resources you have added.
