The Growth Fellowship with Justin Lodge
Who’s afraid of the big bad AI wolf?


Dr Andrew Ng is an AI pioneer (https://www.andrewng.org), and he was recently visiting his family in Singapore. I was fortunate indeed to be invited to a short lecture Andrew gave on Friday night to a small audience at Lodge Scientia.

At the very same time I was listening to Andrew, there was a man attempting to burn down a synagogue in East Melbourne while some of my friends were inside. Having received a message during the lecture, I left very quickly afterwards to talk to them.

The lecture was short but powerful. Andrew made three points about being responsible with AI that got my attention: 1. data privacy; 2. energy consumption; and 3. media clickbait BS about the AI ‘singularity’. WE have to be responsible for data privacy, WE have to be responsible for the energy consumption of our AI solutions, and WE have to be responsible for rejecting the media BS about the AI singularity.

This ‘AI singularity’ is a story about the fear that AI will become conscious and take control of humanity.

It was this point - Andrew insisting that WE are responsible for rejecting the media BS - that directly linked these two completely disparate events for me.

Then it occurred to me that all three points were related to the attack, and that what we have to take responsibility for is the quality of consciousness with which we apply AI.

  • AI and Data Privacy

Andrew made the important point that we have to be responsible for respecting data privacy and for making sure that the vast amounts of data being shipped into the cloud do not contain personally identifiable information.

It was only a small point in Andrew’s presentation, but for me it was a key one. He noted that the engineers he talks to are generally good people who take care to ensure that people’s privacy is respected.

I’m sure the German engineers of the mechanical punched-card systems at Hollerith that the Nazis used, nearly a century ago, to track Jews, communists, socialists and the “undesirables” (homosexuals, people with Down syndrome and Freemasons) had no idea that the data would be used to send those people to their deaths in the gas chambers.

They were very likely diligent and responsible in their engineering, but they had no idea what data the new technology would end up processing, or the “final solution” it would be used for.

The engineering disciplines underpinning the management of structured data on punched cards - the same disciplines used to select people for extermination - are still alive and well in the IBM mainframes of today.

The massive advantage of AI is that we train LLMs to read and interpret vast amounts of unstructured documents rather than structured data.

In my latest book, I talk about how AI data has been used by the healthcare industry to automate insurance claim processing by reading vast amounts of claims documents and making automated decisions.

This week Bupa Australia was fined US$23 million for “unconscionable conduct” in denying healthcare claims - in effect letting people die without the treatment they were entitled to.

In my view, this use case is not dissimilar to the use of punched-card technology in the 1930s; in this case it was the misuse of data and AI technology to process claims documents that caused people to die.

The management of Bupa Australia took a conscious decision to use AI data processing to deny legitimate claims for healthcare treatment.

This is why Andrew’s point really stood out for me: not only do we HAVE to be responsible for AI data and how it is used, we also have to be responsible for its impact.

If we are incompetent in our quality of consciousness about the human impact, how can we be trusted to responsibly govern the application of AI?

When we are addicted to drama we are reckless and this makes us incompetent.

  • Energy Consumption

Andrew made the point that the cost of the “pipes” we use to ship data to the lowest-cost location for AI processing is now negligible compared to the cost of the AI itself - and that AI’s energy consumption is being driven by our vast appetite for AI.

At the same time, the productivity and efficiency of AI processing are increasing exponentially, driven by the hyperscale cloud providers: Amazon, Google and Microsoft (Azure).

The nascent arrival to market of quantum computing from these vendors will again drive down the energy demands of AI processing - and we’re already seeing experiments here in Singapore deliver good results.

This is good news because it has the potential to rapidly rein in the vast amounts of energy that our growing appetite for AI consumes.

While we do need to be mindful of the energy our appetite for AI consumes, Andrew made a key point: learning to engineer for the efficient use of AI resources is as important as learning to code with AI.

Later in his talk, Andrew chose to go off-topic from the energy consumption of AI technology to talk a little about geo-engineering experiments to address climate change.

The simple fact is that the amount of energy consumed in deploying AI technology in hyperscale data centres is negligible compared to the amount of energy being consumed by militaries across the globe killing civilians and fighting wars for no good reason.

I would contend that developing novel and interesting experiments in how we use AI to stop wars would be a great place to start to reduce our energy consumption on the planet, rather than geo-engineering experiments.

While we choose to ignore the killing of families and children, and fail to use every means available - including AI - to stop it, doesn’t that also indicate that our capacity for consciousness is stunted?

When we deliberately choose to ignore what is going wrong around us then this is entitlement.

  • The “AI Singularity” - who’s afraid of the big bad wolf of AI?

Andrew spoke to the fear-uncertainty-and-doubt story in the media about the “AI singularity”, in which AI takes control of the human population within five years.

Andrew has done a fair amount of research into the negative press about AI, and has yet to find evidence that this hypothesis is real.

All the negative doom-mongering bullshit about the “AI singularity” is deliberately designed to speak to our fears.

It really is a case of “Who’s afraid of the big bad wolf of AI?” propaganda.

Andrew’s point was that we have to be discerning and exercise discrimination when listening to the drama being created around AI and, in my words not his, learn to recognise the BS for what it is.

Given this fear-mongering, how long will it be before we hear about someone trying to burn down a data centre because they’re afraid of AI - as happened with 5G mobile-phone base stations in the UK?

And this is where I made the connection between the two disparate events: Andrew’s lecture and what happened in East Melbourne at the same time.

In effect, the fear-based clickbait stoking the fires of drama around AI today is no different to the right-wing fascist propaganda published by the Daily Mail in the 1930s. We regularly see echoes of 1930s Nazi propaganda - about the Rothschilds, Jews and Freemasons exerting control over global geopolitics - repeated on social media today. The result: firebomb attacks on a synagogue in East Melbourne on a Friday night.

While we remain addicted to drama, we will continue to allow ourselves to be distracted and overwhelmed by the bullshit propaganda delivered by the big media corporations - just as they did a century ago when supporting the fascists.

This isn’t just about AI: we need discernment and discrimination across ALL of the drama generated by media corporations that deliberately write their propaganda to stoke fear.

We have to fight to outwit our addiction to drama if we are to gain the discernment necessary to say ‘no’ to the fear-based bullshit.

When we choose to buy-in to the BS instead of actively saying ‘no’ to it, then we are choosing to be cruel to ourselves.

And that, my friends, is exactly what my latest book is about: the fears that keep us trapped in our addiction to drama and leave us vulnerable to recklessness, entitlement and cruelty.

I will be launching the audio edition of my new book “Fight for your freedom or die trying” on Saturday 13 September in Singapore, and I’ll be raising funds for charity by signing a limited number of paperback and hardback copies at the event.

If you want to be there, RSVP - or if you would like to buy a signed copy, head to my website: justinlodge.com
