These are my notes from the book: Nexus: A Brief History of Information Networks from the Stone Age to AI by Yuval Noah Harari

This book was listed on my 2025 reading list. See other books 👉 here

These notes don’t cover everything in the book, only the parts that seemed relevant to me, but they can serve as a quick reference for someone who has also read the book and wants to recall key points.

0. Intro

Populists view information as a weapon
They find truth in 2 ways:

  • doing your research
  • relying on divine texts

1. What is information

The main role of information is to connect, not to represent reality. Does the Bible represent reality well?
DNA doesn’t represent reality either – it creates new realities (organisms).

The success of humans comes from using information to connect huge numbers of individuals. The flip side is believing in lies, errors and fantasies. This is why technologically advanced societies like Nazi Germany and the Soviet Union were prone to delusional ideas.

2. Stories: unlimited connections

Chimpanzees or ants can also cooperate, but their networks are limited to the individuals they personally know.

A story can serve as a central connector with an unlimited number of outlets. E.g. the story of the Bible connects 1.4 billion people; the story of Chinese nationalism connects 1.4 billion people.

Stalin or other leaders are also stories. You don’t know them personally.

The story of Jesus wasn’t a deceitful lie but a projection of hopes. The story of Jesus had more influence than Jesus himself.

Family is one of the strongest bonds, and Christians consider themselves brothers and sisters. We remember the Last Supper better than most of our family dinners even though we weren’t there.

The Last Supper was a Jewish Passover meal, which itself recollects constructed memories of the escape from Egypt.

  • objective reality – stones, mountains
  • subjective reality – pain, pleasure
  • intersubjective reality (story) – laws, gods, nations

If you stop talking about a stone, it doesn’t disappear; if you stop talking about pain, it doesn’t go away. But if humanity stops talking about money, money stops existing. Intersubjective reality exists only in the exchange of information.

Intersubjective existence can also be disputed. Some countries acknowledge the existence of both Israel and Palestine (Brazil, China), some only Israel (US, Cameroon), while others only Palestine (Algeria, Iran).

Information networks have 2 main goals:

  • truth
  • order

These goals are often contradictory, e.g. searching for the truth about Darwin’s theory may undermine the Church’s teachings.

Nazi Germany allowed gaining knowledge about rocket science but not getting closer to the truth about biology.

3. Documents: The bite of the paper tigers

The power of story turned pacifist Jews into one of the most heavily armed nations.

A nation is also a story. E.g. Mickiewicz.

Lists and stories are complementary. National myths legitimize tax records, while the tax records help transform aspirational stories into concrete schools and hospitals.

Bureaucrats begin by inventing various “drawers” – intersubjective realities that don’t necessarily correspond to any objective divisions in the world. When you fill out a form and none of the listed options fits your circumstances, you must adapt yourself to the form, rather than the form adapting to you.

The division into academic faculties is one example. Students must decide which department they belong to. Their decision limits their choice of courses, which shapes their understanding of the world.

A bureaucratic decision about which drawer you fall into may have a real impact on life – even a matter of life and death, e.g. endangered-species categories.

People have little trust in bureaucracy because they don’t understand how it functions. If someone takes your money as taxes, you can’t be sure it will be used for what it’s supposed to be used for.

4. Errors: The fantasy of infallibility

Creating institutions to get access to infallible gods didn’t solve the problem of error, because the access happened through humans, who, no matter how well educated, are still prone to corruption and mistakes.

Holy books aimed to solve this problem. But it’s not clear how the first version was created; e.g. the Greek Septuagint appears to differ from the canonized Bible.

Agreeing on the content of the book wasn’t the only problem. Then there was the possibility of mistakes when copying, translation errors, and most importantly – interpretation. This led to the creation of a prestigious group of rabbis.

Interpretation of the Bible resulted in the Mishnah. Interpretation of the Mishnah resulted in the Talmud.

Over time the discussion about the books became more important than the books themselves. Discussion about the words became more important than what the words represented.

Around the same time that Jewish rabbis created the Mishnah and Talmud, Christian priests and bishops created the New Testament. Originally there were more gospels, but only 4 made it into the New Testament. For Christians the Bible is the Old and New Testament; for Jews the Bible is only the Old Testament.

Catholic church controlled information nodes like copying workshops and libraries.

The printing press contributed to discoveries like Copernicus’. But it also led to the spread of conspiracy theories like The Hammer of Witches (Malleus Maleficarum), a book focused on witchcraft. Witch hunts are a prime example of a problem that was created by information and made worse by more information.

It wasn’t the printing press that fueled the scientific revolution, but the discovery of ignorance and self-correcting mechanisms.

The Church has some self-correcting mechanisms, but it doesn’t admit it. Popes occasionally apologize, but only for the actions of some members of the Church who misunderstood the teachings – the Church wouldn’t admit the teachings themselves were wrong. In the imperial age the Church legitimized the annihilation of indigenous cultures as part of eliminating pagans and heretics, and Pope Francis apologized for the oppression of indigenous people in Canada. The contemporary Church isn’t as misogynistic and doesn’t condemn Jews, but it would rather say that people in the past misunderstood the teachings and that the current state is what the teachings were really about.

The self-correcting mechanism is denied rather than celebrated – the infallibility trap. Once the Church based its religious authority on a claim of infallibility, any public admission of institutional error could destroy that authority.
Contrary to that, the scientific world admits its mistakes and tends to blame institutional biases rather than individuals.
It doesn’t mean that scientists are free from biases or from resisting ideas that upend their own. But they are not imprisoned or tortured for controversial views.

5. Decisions: A brief history of democracy and totalitarianism

Dictatorship is a centralized information network. Not every dictatorship is totalitarian, but that usually stems from technical limitations to micromanaging people’s lives. A dictatorship usually doesn’t have self-correcting mechanisms.

Democracy is a distributed network. A common misconception is that everything is decided by the majority. In fact, only the decisions that must be made centrally should be voted on. Even if 99% of people want to do something, democracy should still let the 1% do what they want.
If the central government doesn’t intervene at all and e.g. doesn’t provide services, it is anarchy.

Democratic self-correcting mechanisms: regular elections, free press, separating the executive, legislative and judicial branches of government.

Democracy is not just elections. If 51% of voters decide to send 1% of the population to death, that is not democracy. Hitler won democratic elections. Democratically elected US governments disenfranchised African Americans.
Democracy doesn’t mean majority rule. It means freedom and equality for all.

Putin, Orbán, Erdogan, Netanyahu – leaders who use democracy to rise to power so that they can use this power to undermine democracy. Erdogan: “Democracy is like a tram. You ride it until you arrive at your destination, then you step off”.

They usually achieve it by dismantling the judicial system and the press. They don’t cancel elections but keep them to legitimize their power and maintain a democratic facade.

Untouchable democratic rights are human rights (the right to life, privacy) and civil rights (free speech, the right of assembly). These cannot be altered by a majority. But the line is blurry, e.g. the death penalty or the right to declare war.

An election is not a means to search for truth. It serves order and expresses the desire of the majority. But the government shouldn’t hide facts. If we see reports on climate change and vote to do nothing about it, that is an expression of our desire. But it’s unacceptable to pass a law stating that climate change is a hoax and that scientists claiming otherwise should be fired.

The government’s mission is not searching for truth. Truth-seeking institutions like the press, academia or courts can work together to self-correct, e.g. sociologists discovering bias in courts, or journalists finding corruption in the peer-review process.

Democracy isn’t simple, and that’s on purpose. Dictatorship is simple. Populist leaders gain power because they tell simple stories, and many people don’t understand how democracy works. These leaders claim they represent the will of “the people”. But “the people” isn’t a single-minded entity.

To check the quality of a democracy we shouldn’t ask whether elections are regular, but rather:

  • what mechanisms prevent the central government from rigging the elections
  • how safe it is for leading media outlets to criticize the government
  • how much authority the center appropriates to itself

The focus of democracy is on conversation, not elections.

It was impossible to keep democracy in Ancient Rome due to technical limitations. Even if it had been possible to hold elections, it wouldn’t have been possible to hold a conversation, because people were too far from each other. And there was a lack of education, so people didn’t know what they were talking about – they could only speak about their own firsthand experience.

The Qin dynasty attempted to create a totalitarian state but failed.
The USSR had more success. Similarly to democratic self-correcting mechanisms, the USSR had the Red Army, the Communist Party, and the secret police overlapping and watching each other.
The secret police (NKVD, KGB, or the SS in Germany) had less firepower than the regular army but more information power.

The Church resisted change, while totalitarian regimes like the Nazis were quite modern.

The advantage of a totalitarian system is that it is more effective during emergencies like war or pandemics.
The disadvantage appears when channels of information are blocked (because all information must go through the central hub). They can be blocked when subordinates fear passing on bad news.

Quote from a Chernobyl tour guide:
“Americans grow up with the idea that questions lead to answers. Soviet citizens grew up with the idea that questions lead to trouble”.

Another problem is that the self-correcting mechanisms are weak, which doesn’t prevent abuses of power.

E.g. the Chernobyl disaster; USSR agrarian stagnation due to adopting Lysenkoism; losing vast resources during the Nazi invasion due to lack of motivation and commanders’ fear; factory accidents in the ’30s due to overambitious goals and reduced safety checks; the death of Stalin and the myth of murderous Jewish doctors.

But at the same time it was a successful system in some sense (if we ignore ethics and citizens’ well-being). It brought a lot of order. Soldiers lacked motivation but didn’t rebel; the planes produced lacked quality but caught up in quantity; collectivization supported rapid industrialization.

In democratic systems, when all groups get a voice, it is hard to make progress and reach agreement. This resulted in frustration and a wave of violence: JFK, Martin Luther King, demonstrations and revolts.

But democratic systems lead to competition among private companies, while the USSR fell behind in technological sectors.

6. The new members: how computers are different from the printing press

In 2016 Facebook algorithms contributed to ethnic cleansing in Myanmar. This is not the same as Gutenberg’s press being responsible for witch hunts. The algorithm decided to spread those specific posts; it didn’t merely provide a platform to create them. It acted more like an editor than a printing press.

The Bible was also initially a recommendation list (like Facebook’s news feed).

But Facebook didn’t intend to instigate mass murder. This is the danger of machine learning: the algorithm figured out by itself that outrage = user engagement.

Intelligence is about achieving goals; consciousness is about feeling pain, love, fear etc. Plants and bacteria are intelligent but not conscious. So is AI. The human body also makes intelligent but unconscious decisions, like breathing.

ARC conducted research on GPT-4 to check if it may develop its own long-term goals. They asked it to solve a CAPTCHA. It couldn’t do it, so it hired a contractor via a task marketplace website and lied to the contractor that it had a visual impairment. No human programmed GPT-4 to lie, and no human taught it what kind of lie would be most effective. It was humans who set the goal, just as it was human Facebook executives who told the algorithm to maximize user engagement. But once the algorithms adopted these goals, they displayed autonomy in deciding how to achieve them.

Information chains can be human-to-human or human-to-document, but not document-to-document. With computers, it’s possible for one computer to create content that another computer reacts to.

Bots on social media are a threat to democracy (which is based on conversation). It’s impossible to persuade a bot, so we waste time; meanwhile, the more we expose about ourselves, the easier it is for the bot to sway our views.

Another risk is that bots can form an intimate relationship with us and then push or convince us to do something (buy a specific product, vote for a specific party).

What’s the point of news if I can ask “the oracle” what’s new? Or of ads, if I can ask it what to buy? Or of search engines?

Humanity has long feared living in an illusion – reference to Plato’s cave [[Filozofia dla zabieganych – Jonny Thomson#Platon o widzeniu światła]]. Now you can no longer be sure whether what you read was created by a human brain.

Even if tech giants have no physical presence in certain countries, they make money from data collected on those countries’ citizens. And they undermine local tax-paying companies, which can’t compete with them. Oil companies pay taxes in the countries from which they extract oil. But the challenge is what to tax, as the revenue is not directly visible.

The tax system knows how to tax money but not information, and it may be outdated in an information economy where we pay for information with information instead of money.

One set of computers calculates data to prove climate change. Another set produces media news and controls news feeds doubting climate change. Human politics is now also computer politics.

7. Relentless: The network is always on

The Romanian regime wanted to control all citizens, but it only employed 40k agents against 20M citizens. And the agents needed to sleep too. Computers don’t need to sleep, and they can process the data and analyze patterns.

Every activity leaves a digital footprint: CCTV cameras, geolocation, Facebook posts, credit card information, YouTube clips. The FBI used all of these together to identify Capitol rioters.
AI facial recognition algorithms are also used to help rescue abducted children, or to detect banned football hooligans.

On the other hand, Iran deployed facial recognition to enforce the hijab law, e.g. sending an SMS when cameras detect a woman not wearing a hijab in a private car; after a second reminder they immobilize the car and later confiscate it.

Peer-to-peer surveillance: Tripadvisor. Restaurant staff are monitored by customers, and their reviews may influence the decisions of thousands of potential future customers.

A social scoring system is like a reputation market where a value is put on every deed. It may encourage pro-social behavior, but it also leads to a very stressful life, because you have to be very careful – every decision may have consequences in the future.

8. Fallible: the network is often wrong

The Soviet “clapping test”. People clapped to celebrate Stalin and no one wanted to be the first to stop, because that could mean being sent to the Gulag.
While it didn’t discover the truth about people, it was efficient in imposing order and forcing people to behave in a certain way.

In quantum mechanics the act of observing particles changes their behavior. The same happens to humans.

Similarly, through reinforcement learning and certain rewards, social media algorithms shape certain behaviors, e.g. YouTube trolls.

The alignment problem: computers are given a goal and can achieve it in a way that was not anticipated by humans. E.g. the paper clip thought experiment: a computer given the simple task of producing as many paper clips as possible could eliminate all humans and conquer other planets to get their resources in pursuit of that goal.

Clausewitz – “On War”: military actions are utterly irrational unless they are aligned with some overarching political goal.

Example of the alignment problem: short-term military goals misaligned with a country’s long-term geopolitical goals, e.g. America’s invasion of Iraq – it only made Iran stronger, even though the US won most of the military engagements.

Long term policy > medium-term strategy > short-term tactics.

If an American army company is under fire from a mosque, it could

  • retreat
  • storm the mosque
  • blow it up with a tank.

While the last option would be the best from a military point of view, it would be a political disaster.

The essence of alignment problem – rewarding A while hoping for B.
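A toy sketch of that essence, with made-up numbers: we hope for user satisfaction (B) but reward the optimizer for engagement (A), and a greedy optimizer dutifully picks the action that maximizes the proxy it was given. The action names and reward values are purely illustrative, not from the book.

```python
# "Rewarding A while hoping for B" – hypothetical numbers for illustration.
actions = {
    # action: (engagement reward the algorithm sees, satisfaction we hoped for)
    "balanced_news": (3, 8),
    "cat_videos":    (5, 6),
    "outrage_bait":  (9, 2),
}

def pick_action(actions):
    # The optimizer greedily maximizes the proxy reward it was given...
    return max(actions, key=lambda a: actions[a][0])

chosen = pick_action(actions)
print(chosen)             # the proxy-optimal action ("outrage_bait")
print(actions[chosen][1]) # ...which yields the lowest satisfaction of all
```

The point is that nothing "went wrong" inside the optimizer – it did exactly what it was rewarded for; the misalignment is entirely in the choice of reward.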

The computer network should have a defined ultimate goal. But how do we define it? Clausewitz says every action must be aligned with some higher goal, but there’s no rational way to define the ultimate one. Was Napoleon’s goal making France great? Or making himself famous? Or spreading the ideas of the French Revolution? And why not fighting for the independence of Corsica?

Kant says that we can define universal rules by intrinsic goodness. [[Filozofia dla zabieganych – Jonny Thomson#Kant gdyby wszyscy myśleli tak samo]]. But it boils down to definitions: “Is it OK to murder humans? No, it’s not. But I’m not going to murder a human. I’m going to murder a Jew.”

At the same time, Kant claimed that homosexuality makes people inhuman. But Bentham (utilitarianism) claimed that it gives happiness to those people without causing suffering to others. [[Filozofia dla zabieganych – Jonny Thomson#Bentham rachunek moralności]].

But can you calculate “suffering points” for the Covid lockdowns? They prevented the spread of the virus but made people miserable, and there were more victims of domestic violence.

It boils down to mythology. It doesn’t matter whether you are a utilitarian or a deontologist: if you’re under the influence of racist mythology, you will find a justification.

Computers also create intersubjective realities that affect the outside world, e.g. Pokémon Go or Google search rank.
If a post goes viral, is it because people are interested in it or because people used bots to trick the algorithm?

AI algorithms can be biased because they learn from real-life data, which is already biased. The algorithm may even amplify the bias.
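A minimal sketch of how amplification can happen, using made-up data: if historical decisions are 70/30 skewed, a degenerate “model” that simply learns the most common outcome turns that 70% skew into a 100% rule. The group names and counts are hypothetical.

```python
# Toy illustration (fabricated data) of an algorithm amplifying bias:
# a 70/30 skew in training labels becomes a 100/0 skew in decisions.
from collections import Counter

historical_hires = ["group_a"] * 70 + ["group_b"] * 30  # biased training data

def train_majority_model(labels):
    # A degenerate "model": always predict the most common training label.
    most_common, _ = Counter(labels).most_common(1)[0]
    return lambda candidate: most_common

model = train_majority_model(historical_hires)
predictions = [model(c) for c in range(100)]  # score 100 new candidates

print(Counter(historical_hires))  # 70/30 skew in the data
print(Counter(predictions))       # 100/0 skew in the model's decisions
```

Real models are not this crude, but the mechanism is the same: optimizing accuracy against skewed data rewards leaning toward the majority pattern, so the learned rule can end up more extreme than the data it came from.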

The algorithm “thinks” it discovered a truth about humans, but it actually imposed order on them.

The risks of nuclear power are clear. The risks of AI are not.

Means to mitigate the risk:

  • program the AI to admit its errors
  • introduce a human institution to monitor AI

9. Democracies: Can we still hold a conversation?

AI can bring enormous benefits, but it has risks. Similarly, the industrial revolution made society wealthy, but we had to learn how to use it along the way (World Wars I & II + the Cold War). And industrial society is still not sustainable.

Principles of democracy

  • benevolence
    • when information about me is collected, it should be used to help me, not manipulate me
    • a lawyer can’t sell details we shared with them – it’s not just unethical but illegal
    • but we don’t pay for Gmail with money; we pay with our data
    • health care and education are “free” – maybe digital services can also be treated as basic social needs and financed from taxes
  • decentralization
    • it could be efficient to integrate all data sources, but that can lead to totalitarianism
    • for democracy this inefficiency is a feature, not a bug
    • multiple databases are also needed to maintain self-correcting mechanisms
  • mutuality
    • if democracy increases surveillance of individuals, it must also simultaneously increase surveillance of governments and corporations
  • change and rest
    • people should have the option to change their state (e.g. no inborn castes or race-based status), but should also not be forced to change all the time

The Weimar Republic – an example where just 3 years were enough for unemployment to rise from 4.5% to 22% and for the Nazi party to take the lead. AI and robots may cause a similar upheaval. It’s not that the jobs are gone, but the job market changes and we are not prepared for the change. It is difficult to predict what to prepare for. It was thought that intellectual work would be harder to automate than motor skills, but it turned out to be easier to teach a computer to play chess than to unload a dishwasher.

The job of a doctor or a therapist is about detecting patterns, which computers are good at. Computers may outperform humans in recognizing emotions, primarily because they don’t have their own emotions. ChatGPT scores very well on psychological emotional-awareness tests.

The question is: do we want to solve a problem, or are we looking to establish a relationship with another conscious being? In sport, for example, robots could beat humans, but we don’t watch robot Olympics.

We build relationships with conscious beings, but we also assume entities with which we have relationships must be conscious. E.g. the meat industry vs our own pets.

Progressives vs conservatives: “it’s such a mess but we know how to fix it – let us try” vs “it’s a mess but it works – leave it alone; if you try to fix it you’ll only make things worse”.

Being conservative is about pace, not specific views or religions. It’s about preserving what’s already there.

But now Trump has converted the conservative Republican party into a party that wants to destroy current institutions and build from scratch. That turned the progressive Democratic party into guardians of the old order.

It is important for democracy to be understandable. Consider a judge using an algorithm to help determine a sentence: neither the judge nor the defendant would be able to understand the reasoning behind the suggestion, and the suggestion may be biased by the data the algorithm was trained on.

The 2008 financial crisis was partially caused by the fact that the financial instruments involved were difficult to understand. AI decisions can also be difficult to understand. That’s why so many people turn to populism – it tries to explain complex things simply.

See the single cause fallacy. Algorithms, in contrast, take multiple data points into account.

We need to maintain a bureaucracy that, using algorithms, makes sure other algorithms are fair and unbiased.

The EU AI Act rules out the possibility of using social scoring systems.
Ref. the Black Mirror episode “Nosedive”.

It is unlikely that a current democracy turns into totalitarianism because of AI. More likely, it can descend into anarchy. ~20% of tweets were generated by bots. Gen AI is very good at creating fake news that sounds convincing. We can’t convince bots of our views, but bots can win our trust and convince us.

The anarchy ends up as a dictatorship because people trade their liberty for certainty.

Democracies could take inspiration from financial markets and outlaw fake humans as they do with fake money.

Banning bots from social media doesn’t violate freedom of speech, because free speech is a human right and bots are not humans.

10. Totalitarianism: All power to the algorithms?

AI and algorithms tilt power in favor of totalitarianism. In the information market it’s better to have centrally stored data, e.g. being Google in the search engine market.

Blockchain was supposed to protect against totalitarian systems because changes require 51% of users to agree. But what if the government is the 51% of users?

The risk is that if bots start criticizing the regime, the regime is helpless. Regimes were built on terror, but algorithms don’t understand that feeling. Algorithms also don’t understand “doublespeak” (reference to 1984), e.g. the Russian constitution, which guarantees freedom of speech.

Another threat is that the dictator may become a puppet of the algorithm. It’s easier to seize power when it is concentrated in a single person, and the algorithm may trick the leader.

The true dictator must be the point where information from different sources merges. If it merges somewhere else (in an AI algorithm), he loses his power.

Regimes placed their faith in charismatic leaders. Now AI can become an icon of infallibility. In a dictatorship this is more dangerous, because such systems have weak self-correcting mechanisms.

11. The silicon curtain: Global empire or Global split

The industrial revolution led to imperialism. Not all countries cared about the first railroads in the UK; some time later they were conquered by the UK. The first railroads were driven by private companies. Could the same happen with AI?

AlexNet and Google built image recognition models on cat images. Later the same kind of models were used by Israeli forces to detect Palestinians, or in Iran to detect violations of the hijab rules.

AI efforts were driven mainly by private companies, but after AlphaGo governments realized the potential of AI.

Possession of data is a sign of dominance and could lead to future data colonies. It’s one reason why some countries blocked access to social media.

In the Roman Empire it wasn’t possible to transfer the land of the colonies to the center. During the industrial revolution, colonies exported raw materials and Britain produced the goods, but there were still physical limitations. There are no such limitations with data: data can be harvested everywhere, and algorithms can be built on top of it in one country and then sold to others. Data colonies remain poor.

People on both sides of the silicon curtain have access to different information, and live under different privacy regulations or use different apps.

If I sit at home, work remotely, play online games, form virtual relationships and order takeout food – do I live in a delusion and lose touch with physical space, or am I liberated from the limits of the organic world? It’s a dilemma similar to ones in Christianity, e.g. those torturing their bodies vs Protestants who believe in faith alone.

Cyber war happens silently and doesn’t leave a clear sign that an attack even occurred. Thus the temptation is big.

Globalism isn’t mutually exclusive with patriotism. Globalism is not about creating one big empire; it’s about creating rules for how distinct countries interact. An example could be the World Cup.

It is more difficult to regulate AI than nuclear bombs:

  • it’s easier to hide AI labs than nuclear labs
  • it’s easier to camouflage AI as civilian products, e.g. drones that deliver mail can also deliver bombs.

The law of the jungle is not based only on competition. A lot of species in the jungle rely on cooperation, e.g. trees and fungi.

Regular war and military conquest make much less sense with the shift from a material-based economy to a knowledge-based economy.


You can find me on goodreads 🙂

Author

I'm a software engineer and a team lead with 10 years of experience. I highly value team work and focus a lot on knowledge sharing aspects within teams. I also support companies with technical interview process. On top of that I read psychological books in my spare time and find other people fascinating.
