Think Harder: The dark side of content and AI

Omnichannel Podcast Episode 39


Understanding content and AI is essential as we increasingly integrate artificial intelligence into our societies.

This episode is a wake-up call for anyone who cares about the future of technology and society. Explore the ethical complexities of AI and technology in this insightful podcast with Noz Urbina and Larry Swanson. Learn about the manipulation tactics of big tech, the historical power of corporations, and the crucial role of government in technology regulation.

“AI itself was an accidental innovation. These technologies just kind of keep coming at us through these little explosions of innovation somewhere. And we’re so not ready for what’s coming at us or the speed that it’s coming at us.” – Noz Urbina

Show Notes

  • Yuval Noah Harari – Mentioned as an author whose works explore the implications of technological advancements and their historical contexts. Harari’s ideas about how world wars were directly tied to industrial revolutions are discussed, specifically how technological capabilities can shape global conflicts and societal changes.
    Learn more about Yuval Noah Harari and his works: Yuval Noah Harari’s Official Website
  • Skinner Box – An apparatus from behavioural psychology used to study operant conditioning. It’s brought up in the context of how Silicon Valley uses these principles to design technologies that can addict and manipulate users.
    More on the Skinner Box: The Science Behind Those Apps You Can’t Stop Using
  • Naomi Klein – Cited as a thought leader whose critique of corporate globalisation aligns with the discussion about the dangers of unchecked corporate power and its impact on society.
    Discover Naomi Klein’s insights: Naomi Klein’s Official Website
 

What you’ll learn


  • How corporations knowingly apply psychological exploits to keep us hooked on their apps and devices
  • The pace of AI advancements and our unpreparedness to handle the consequences of such acceleration
  • Why tech is really “applied philosophy” and the troubling ethical dilemmas now being codified by Silicon Valley
  • The alarming history of corporate exploitation and its parallels to today’s tech and industry giants
But it’s not all doom and gloom! We also explore:
  • Tips for thinking more critically about AI and protecting yourself from manipulation
  • The role of government regulation in reining in corporate overreach
  • Why active participation in institutions is our best hope for a better future



Speaker(s):

Noz Urbina
Urbina Consulting
Larry Swanson
Elless Media

Full session transcript

THIS IS AN AUTOMATED TRANSCRIPT

 

SPEAKERS

Noz Urbina, Larry Swanson


Larry Swanson  03:00

Everyone, welcome to the second instalment of the AI series that Noz and I are doing together here on his podcast. You probably know Noz as a real proponent of, and an expert in, using AI in your workflows. But there’s another side to this stuff, and Noz and I want to talk about that today. So Noz, let’s start just a little bit with this: even within the content world, there’s a dark side to content and the current Internet. Can you talk a little bit about your thoughts on that?

Noz Urbina  03:48

Yeah, absolutely. I’m super aware that I work in an industry where I’m trying to help brands build relationships, use offline and online channels in concert, know more about their audiences and, you know, influence behaviour in many cases, or support and encourage behaviours that the brand would like. And I know that it’s kind of like working in, you know, pharmaceuticals or nuclear power: I’m working in something where the intent is good, but there are so many dangers right around the periphery of what I’m working on, all the time. And from my university days, and in my personal life as an avid amateur hobbyist, if you will, of cognitive science and psychology, I’m very concerned and sensitive about the dark side of content and digital experiences, and, when they become omnichannel experiences (whether we’re talking about the metaverse, or virtual reality, or other types of sensorial input), just how scary all this can be: everything we’re trying to do around behaviour and intimacy and digital relationships and omnichannel relationships.

Noz Urbina  05:22

We have a very odd kind of perspective on these things: we’re very scared about threats to our physical selves, and very arrogant about threats to our psychological selves. The thing that always comes to my mind is that cocaine is illegal just about everywhere, but in many, many places, slot machines are not. I am not a fan of Vegas, though I’ve been to many a conference in Vegas. And I remember the first time I was there, getting on to the plane and seeing that there were slot machines right in the boarding gate, so that you could play literally until you had to get on the plane. And I thought: how is this allowed? I always come back to how we think of ourselves as the masters of our internal selves, yet we see all these examples showing that we’re not. We see cults, we know what brainwashing is, we understand cults of personality, we understand abusive relationships and the cycles of abuse that people struggle to break out of. But as long as there isn’t a physical substance involved, there’s this kind of “oh well, that’s that person’s psychological weakness or something.” Now we’re starting to undo some of that stigma, but only in very select places, like actual abusive social relationships, where there’s another person involved. When it’s another system, like a slot machine, which is a very, very simple device, we just think that we’re immune to these powers of influence. And so when we’re talking about content, which is all about building relationships and, at the end of the day, working with influence, I think we’ve seen what Meta and Google and algorithmic addiction can do, yet we don’t take the proper care.

Larry Swanson  07:32

Yeah, I love that. You’re a cognitive psychology enthusiast, but Silicon Valley is crawling with people applying it every day to create more and more addictive behaviour, much like the slot machines. I think that’s how the social media feed feels to a lot of people: oh, I’ve got to check my feed one more time before I go to bed.

Noz Urbina  07:51

But they’ve literally added slot machine dynamics. The pull-to-refresh, where you pull the bar, it goes whirr, and stuff comes up, maybe you win, maybe you don’t: that’s literally a slot machine. And that’s not by accident. As you said, Silicon Valley is crawling with these people. A lot of people are unfamiliar with what’s called a Skinner box; if you’re in this field it’s as common as the kitchen sink. Many years ago there was a behavioural psychologist named Skinner who was experimenting with conditioning, trying to find out: what is the algorithm for addiction? They would put rats in a box, and the box had a feeder contraption in it, so that the rat could push a bar and get some food. And they were trying to figure out: what’s the pattern of rewards that will keep the rat pushing the bar, or pulling the slot, the most times? This box, and its dynamics, are well known in Silicon Valley. Game developers and app developers will actually be handed psychological literature asking: how can you put these mechanisms into your game or app? How can you make an addiction machine in an application? It’s not a talked-about practice, but it’s not a secret practice either.
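
[Editor’s note: below is a minimal Python sketch of the variable-ratio reward schedule described above, the same dynamic shared by the Skinner box, the slot machine, and pull-to-refresh. The payoff probability and pull count are illustrative assumptions, not figures from the episode.]

```python
import random

def pull_to_refresh(reward_probability: float) -> bool:
    """One lever press or feed refresh: the reward arrives unpredictably."""
    return random.random() < reward_probability

def session(reward_probability: float, pulls: int) -> int:
    """Count rewards over a run of repeated pulls.

    Rewarding a random fraction of responses is a variable-ratio
    schedule: the pattern Skinner's experiments showed produces the
    highest and most persistent response rates, because the next
    pull always might pay off.
    """
    return sum(pull_to_refresh(reward_probability) for _ in range(pulls))

if __name__ == "__main__":
    random.seed(7)
    # Assumed payoff rate: roughly 1 in 8 refreshes surfaces a "win".
    wins = session(reward_probability=0.125, pulls=200)
    print(f"200 pulls, {wins} unpredictable rewards")
```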

Larry Swanson  09:24

Yeah, and the other thing about that is that it’s predictable. We’re both fans of behavioural psychology, and that whole realm seems germane here: as humans, we understand the Skinner box and the predictable behaviour that comes out of it, but if you’re the one in the box, pressing the levers, you’re not as aware. Does that make sense?

Noz Urbina  09:50

Absolutely. We have what I call a paradox in our perception of complexity: we feel that what is complex to us is complex in itself, and very often the inverse is true. So I’m naming this Noz’s complexity perception paradox, or CPP; gotta give it an acronym. The Skinner box is an example. You have a complex thing, which is the mammalian brain, and then you have the slot machine, which is an incredibly simple device. But we’ve seen, from the Skinner experiments, that the box can addict a rat, and a slot machine can addict a person, to the point that they will neglect food, they will put their children’s college funds in, they will squander their savings. And when we’re in the box, we will always say: no, it’s not really happening to us; we’re above that kind of manipulation by a simple box with a couple of wires on it. But it’s actually incredibly easy to trap a person like that.

Larry Swanson  11:15

Yeah. As a human being I sometimes bristle at that, but it’s been proven over and over again, so I guess we’re just stuck with it. Part of that comes from the way the human brain is wired. When we’ve talked before, you’ve given some examples of your complexity perception paradox (I’m gonna call it the CPP now, because it’s established), these different modes of behaviour, not quite switches. Can you talk a little bit about that?

Noz Urbina  11:46

Yeah. So, I’ll talk a little bit about where the complexity perception paradox really comes from. It comes from evolutionary psychology. The first time I thought of it was when I was learning about chess, which, along with the Turing test, was up there as one of those huge barriers: we thought that no machine could out-strategise the human brain at the lofty game of chess. This was really a big deal. Most of the listeners are probably too young to remember it happening; it happened when I was a young adolescent, and I’ve read a lot about it since. There was a lot of hullabaloo about it, because it was a point of pride. It was… do you remember the name of that story? Maybe you do. The guy who gave himself a heart attack trying to lay train track faster than a machine?

Larry Swanson  12:51

I know the story, but I can’t come up with his name either. Sorry.

Noz Urbina  12:54

So it was the intellectual equivalent of that: the human-versus-machine battle of wits, and it was a kind of blow to the collective pride of human beings. And the complexity perception paradox is this: the things we play at, practice at, and think of as incredibly challenging and difficult (chess, or music, or mathematics, or statistics, the things we take advanced degrees in) seem complicated because they are the things we are not evolved to do. They are things we have culturally added to our society, at the frontier of our intellectual capacity. Therefore they seem hard, therefore we practice them, therefore we think they are hard. But when we try to put them into a machine, the paradox appears. It was much easier to teach a machine to beat human beings at the strategic, intellectual game of chess than it was to teach a machine to fold a shirt. People are still working on that, and it’s still rough in 2024: the newest innovations can do it, you know, fifty or a hundred times slower than a human being, and that’s the cutting edge. We cannot get machines to fold a shirt, or untangle a piece of string, or catch a ball. Actually, I think they can catch a ball now; I’m not even sure. This is because these are highly, highly complex activities that involve physics and perception and multiple senses being synchronised together, and that’s what evolution has been designing since we were, you know, in the oceans: all of these systems to do these things. Solving math problems is vastly easier for a computer than recognising faces; we only cracked recognising faces in the past few years. Playing a violin at symphony level is a vastly simpler act than eating an apple and digesting it. But because those systems are so basic to us as organisms, we think they are simple. So when we are presented with something like an AI, our perception of what’s going on is way off, because what may be easy for an AI to do, and what may be difficult for an AI to do, is completely foreign to our internal perceptions of these complexities.

Larry Swanson  15:35

That’s why I’m all of a sudden thinking of a hilarious video I saw once, of a woman showing how to fold a fitted sheet, really like an algorithm. But that also gets at the pace of change. A lot of those things, like the ease of facial recognition, were crucial for any human to be good at in order to survive, wherever we were as we were evolving. And the act of eating: we don’t even think about it, but it’s incredibly complex, more complex than playing symphony-level violin. How is the pace of change relevant to this?

Noz Urbina  16:15

I think that we’re totally unready for technological change, and that’s been consistent. I read a lot of Yuval Harari, or follow along with him; do read one of his books. He talks a lot about how the world wars were directly tied to the industrial revolutions, especially the Second World War. To do something like hold a rally, you know, a Mussolini or Hitler rally, depends on the PA system. Microphones and amplifiers: to have ten thousand people listening to one person speak requires an underpinning technology. The cult of the individual personality requires the technical ability to syndicate that individual at high fidelity. And then there’s the simple act of waging the war: trying to wage war at that scale without the diesel engine, without the mastery of the combustion engine, is a different concept. Yes, there were great empires and so on, but the speed and the violence of the Second World War, and the industrial way it was approached, are directly tied to the Industrial Revolution. I don’t feel we’ve even recovered from that, as societies. As Yuval Harari puts it, our quality of life was in many ways better pre-industrial age, and even pre-agricultural age: we would have had more balanced diets and less social inequality. The concentration of wealth, eating only one type of grain, not getting enough light, not getting enough exercise: these are all industrial-age things. And I think many people don’t know this, but AI itself was an accidental innovation. It was created like penicillin. I don’t know if people know the penicillin story: some guy left his cheese sandwich out, discovered that the sandwich was killing bacteria, and so discovered antibiotics. AI began with a psychologist attempting to simulate the human brain. He thought: okay, I’ll create an electric neuron, and then I’ll build a little electric brain and do some psychology on it. And it didn’t work, because the computers of the day couldn’t scale it up to any level where psychologists could have any use for it. But it percolated along in the technology world, and sat there for decades, until we had the computation power to scale these brains up and get to the point we are at today. So these technologies just kind of keep coming at us through these little explosions of innovation somewhere. And we’re so not ready for what’s coming at us, or the speed that it’s coming at us.

Larry Swanson  19:26

Yeah, that’s why I asked about the pace too. It seems like there are attempts at regulation, attempts to shove the genie back in the bottle, but when business and Silicon Valley money are driving a lot of the decisions about these technologies, it’s hard to keep up, whether that’s in our work, at the governance level of a digital project of some kind, or up at the level of protecting human communication from pollution by hallucinating LLMs. Does that make sense?

Noz Urbina  20:03

Absolutely. Well, for example, the EU just came out with the first regulations, and that was incredibly fast; by legislative standards, the EU is an incredibly agile body. And it did things like ban Chinese-style social scoring: no government or institution is allowed to automatically monitor people and assign them social scores. It also made it illegal to counterfeit a human being. So Google’s system, where an AI can call on your behalf, pretend to be a human assistant, and book a restaurant for you? It can’t do that; AI systems have to self-declare. And it set danger ranks: medical systems or military systems sit at some level of danger, so there’s a prohibited level of danger, then high risk, medium risk, and so on.
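
[Editor’s note: a minimal sketch of the tiered, risk-based structure described above. The four tier names follow the EU AI Act’s published categories; the example use cases and this mapping are illustrative assumptions, not legal guidance.]

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright, e.g. social scoring"
    HIGH = "allowed under strict obligations, e.g. medical systems"
    LIMITED = "transparency duties, e.g. an assistant must self-declare as AI"
    MINIMAL = "largely unregulated, e.g. spam filters"

# Illustrative mapping only; real classification is a legal judgement.
EXAMPLES = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "AI triage in a hospital": RiskTier.HIGH,
    "assistant that phones a restaurant for you": RiskTier.LIMITED,
    "autocomplete in a text editor": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLES.items():
    print(f"{use_case}: {tier.name} ({tier.value})")
```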

Noz Urbina  21:10

And that’s kind of okay, but it doesn’t escape the complexity paradox. Yes, medical is a dangerous domain, but this approach completely ignores how simple a device can be and still intellectually manipulate us. About ten years ago, that study came out with Meta, where, as the New York Times put it, Facebook knows you better than anyone else. So this is eight-to-ten-year-old data. They were able to show that the algorithm could predict a person’s personality traits better than the human participants could. With 10 likes, Facebook could know you better than your work colleagues: actual human beings who know you, who go to work with you every day, couldn’t predict your tastes, your preferences, your personality as well as the machine. With 70 likes, it could beat someone you live with, a roommate. With 150 likes, it could out-predict a parent or sibling. And with 300 likes, it could know you better than your spouse. They left off the list “predict you better than you can predict yourself”, which is the one Yuval Harari is always talking about; I’m always thinking about that one as well. None of these legislations are even scratching the surface of this. And the US, I’m pretty sure, was looking at legislating based on the amount of compute required to train the models: if you needed a supercomputer server farm to train your model, yours is one of the more dangerous models. The corporations are favouring that approach, and I’m sure they don’t have a very nice reason for doing so. The corporations are saying: yeah, okay, let’s regulate the other huge models like ours, the ones that took tens of thousands of machine hours, or however many petaflops of compute, to train. Which ignores the fact that every computational system gets more efficient: the minute we figure out how to do something with a supercomputer, we get better at doing it on weaker and weaker computers. Apple is finally releasing their AI thing, and the whole point is that it’s going to run quite nicely on smartphones. So we keep missing the point. We’re always reacting: okay, as new technologies come out, we don’t want to harm profit margins or growth, so let’s see what happens before we legislate. So built into the way we run this system is a problem that lets these technologies grow and multiply in the wild, a lot, before we come to any point of understanding them or legislating them. And that scares the hell out of me, frankly.
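
[Editor’s note: a toy Python sketch of the mechanic behind the likes study cited above. Each person is represented as a binary vector of things they have liked, and a plain linear model maps that vector to a trait score. All data here is synthetic and the model choice is an assumption (the published research used real profiles and its own methods); the point is simply that a few hundred binary signals carry a surprising amount of predictive power.]

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Synthetic stand-in: 1,000 users x 500 possible "likes" (1 = liked).
likes = (rng.random((1000, 500)) < 0.05).astype(float)

# Hidden ground truth: a trait that genuinely correlates with some likes.
true_weights = rng.normal(0.0, 1.0, 500)
trait = likes @ true_weights + rng.normal(0.0, 2.0, 1000)

# The model sees only the likes, yet recovers the trait on held-out users.
model = Ridge(alpha=1.0).fit(likes[:800], trait[:800])
print(f"R^2 on 200 held-out users: {model.score(likes[800:], trait[800:]):.2f}")
```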

Larry Swanson  24:29

Yeah, the pace. It’s a little unnerving to think about. But like you said, there are remarkably fast government efforts, like on the part of the EU, and that’s still not enough. Is there, maybe not a solution, but are there coping mechanisms, or helpful ways to understand this? Especially in our worlds, where we’re often working with business decision-makers who are thinking about digital projects. Because that’s the whole point: we wanted to talk today about getting people to think harder and better about this. Any tips at this point for how to think better about the issues you just mentioned?

Noz Urbina  25:12

Yes. So, first of all, I think there’s a terrifying trust of corporations that’s emerged. It’s this sort of dichotomy: I see people saying they don’t trust corporations, but the corporations they don’t trust are, like, oil companies or something like that. And I’m like, yeah, sure, but it’s tech companies who you let into your home, tech companies who you let, in effect, take care of your children. I was just posting about this: somebody shared an article from The Honest Broker about watching Dune: Part Two, which is a three-hour movie. The author is talking about how to break free from dopamine culture, and how these tech companies have made a business model of addiction. He was watching this movie, and for the whole three-hour film he was watching other people, and you could see their phones lighting up in the dark of the theatre. They were on social media, scrolling and scrolling, for the whole film. So they paid to sit in a theatre to watch a film, and they watched their social media feeds instead, because the feeds wouldn’t let them go. Half of the top ten biggest companies in the world are these tech companies who use addiction as a model. Meta… no, not Meta: OpenAI. Like Google in its day, OpenAI came out with its lofty ethical charter. They weren’t going to do military work; they weren’t going to engage in engagement-based monetisation. And the second they got big, those commitments just moved out of existence. It used to take years for a tech company to start quietly removing these things from its code of ethics. OpenAI took, you know, seconds in corporate time: they hit the market, and less than 18 months later they were doing military contracts. And the first thing they ever monetised with ChatGPT was engagement: how long is your application hooking people into the site? So first of all, be suspicious of corporations, and be suspicious of all of the corporations, any concentration of money and power. Be very careful about how you’re letting these organisations into your life. I think we’ve all seen the children. I remember being at a dinner, and there was a child at the table who could no longer hold up their head. They were resting their head on the dinner table so that they could keep scrolling with the phone on their lap. The neck muscles couldn’t keep up with the need to scroll, so they had found a way to take all muscular movement out of their body except for the scrolling finger. So: be careful with these corporations.

The other thing is, if you’re involved in these automation and AI projects, I like to give people the warning that tech is applied philosophy. That one, like the complexity paradox, kind of freaks people out, so I’ll give a very concrete example. When you’re studying philosophy and decision-making, there’s a famous ethics problem called the trolley problem. There’s a train car coming down the tracks, you’re standing next to the track switch, and you have the option to let it keep rolling. If it keeps rolling on one track, it’s going to kill, say, two people; or you can pull the switch, it will switch onto another track, and it will kill one person. Two to one, five to one, whatever the ratio is. The point is: ethically, is it better to not act and let something terrible happen, or to act and be a participant in something less bad happening? Etcetera, etcetera. This kind of ethical problem is one of those complexity things we’ve been wrestling with for thousands of years, ever since we invented ethics and philosophy. The trolley problem becomes very literal once it’s a self-driving trolley. Once you have a Tesla, or any self-driving car, that has to make a decision, and it sees a situation where a family of three is crossing the road with a baby carriage: do you swerve, and potentially kill the passengers inside the car, or do you plough through the family? What was an abstract intellectual problem now has to be codified and decided by Silicon Valley corporations. They are going to make the final judgement call on what humans couldn’t decide through thousands of years of debate. There’s someone who’s very good on this, Karen Hao; you can follow her on LinkedIn. She put up a quote which I really liked: are you actually automating the poor decision-making of the past and bringing it into the future? If you’re going to use historical data to predict what’s going to happen in the future, you’re just going to end up with a future that looks like the past. So when you’re involved in automating any kind of process, or putting in any kind of decision system, and you’re talking about the ethics, the question is: whose ethics? Think about the fact that you’re codifying ethical and philosophical positions, probably without even noticing it. So those are the two big ones: be very sceptical of corporations, and be very aware that tech is not just tech. It’s part of culture, and more and more, it actually codifies aspects of culture.
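
[Editor’s note: a deliberately crude Python sketch of the point that tech is applied philosophy: once an avoidance decision is software, someone has to hard-code the ethics. Every constant below is a hypothetical value judgement invented for illustration; nothing here reflects any real vehicle’s code.]

```python
def choose_manoeuvre(pedestrians_ahead: int, occupants: int) -> str:
    """Decide between staying the course and swerving.

    The trolley problem, codified: whoever picks these weights is
    answering a question philosophers have debated for centuries.
    """
    # Value judgement 1: are all lives weighted equally?
    risk_if_straight = pedestrians_ahead * 1.0
    # Value judgement 2: how likely is a swerve to harm the occupants?
    SWERVE_FATALITY_ODDS = 0.5  # assumed; someone has to choose it
    risk_if_swerve = occupants * SWERVE_FATALITY_ODDS
    # Value judgement 3: does inaction get a moral "discount"? Here, no.
    return "swerve" if risk_if_swerve < risk_if_straight else "stay"

print(choose_manoeuvre(pedestrians_ahead=3, occupants=1))  # -> swerve
```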

Larry Swanson  31:24

Yeah, that Karen Hao quote is amazing, because she’s basically describing the way an LLM works: it reads the whole internet, there’s a cut-off point, and so by definition it’s historical data. That’s just a tiny bit alarming. And we talked about this a little last week: corporations don’t have the best history in this regard, going back to the founding of capitalism, corporations, and modern economics. Do you think that history lesson is relevant here?

Noz Urbina  31:57

Well, absolutely. I cannot give you enough reasons to be scared of corporations. And as you mentioned at the beginning, I’m a proponent of AI, I teach people how to use AI, and yet I cannot stop going on about the dangers of AI. I feel weird about it; I’ve worked with some of the biggest corporations in the world, literally from the top ten. But the whole idea of a corporation is a very scary thing, and I don’t think people are really told the story of just how much corporations throughout history have brought evils upon the world. I remember hearing in history class, for example, that the British took over India, or the Dutch took over Indonesia, or that New York used to be New Amsterdam, stuff like this from history. And I remember the narrative as “the British did this and the Dutch did that”, and there was an assumption, for me, and I assume for a lot of other people, that it was the governments of those countries, the kings and queens and rulers, who went colonising and conquering and indulging in the slave trade. That’s not actually true. The slave trade and colonialism were corporate endeavours. These were publicly traded companies. The Dutch and the British were the first to invent the shareholder-based, limited-liability corporation, and that’s how a lot of these conquests and colonies were funded. The same way we can invest in Bitcoin, or buy Apple devices knowing perfectly well that they’re built with lithium extracted from mines with horrible human rights conditions in Africa, that the LLMs are being trained with human exploitation, that devices are built in China with human rights abuses; the same way we casually participate in that, the average shop merchant in the early days of corporations in the UK or the Netherlands could buy shares to enrich themselves off corporations that were trading slaves. And Indonesia, which is now a whole nation with millions of people in it, was run by a Dutch corporation for 200 years. For two centuries it was under corporate rule, until the Dutch nationalised it. And it was the British government who eventually told the corporations they could no longer trade slaves. It wasn’t a national practice; in the same way we bring out tech innovations today, it was a corporate innovation. And then eventually society goes: wait a second, what are we doing? Hopefully we’re going to react faster this time; society moves faster, information moves faster. But we have to see that it’s not that corporations have recently gone bad. The corporate model has always been doing this to society: leaded gas, tainted medicines, there’s a whole history of this. And I think awareness of just how inherent that conflict of interest is helps us be more cautious when we’re building these systems, participating in them, engaging with them, or letting them into our lives.

Larry Swanson  35:51

A quick aside on that: I moved to the Netherlands about a year ago, and today I’m actually working in Amsterdam, probably no more than a kilometre from the East India trading company’s first office. A nice little circular historical thing there. But as you talk about that, I’m reminded that I listened yesterday to a podcast with Timnit Gebru, the founder of the Distributed AI Research Institute, and she’s very critical. One of the criticisms she levied yesterday, and I’ve heard her say it before, is that colonisers just can’t stop colonising. She was talking specifically about Silicon Valley and this TESCREAL ideology, which is a whole other thing to talk about. But I think there’s a dynamic in there that’s relevant to what you were just saying: some of those people probably thought they were being noble and adventurous, discoverers expanding and, you know, bringing good to the rest of the world, or something like that. Is there a similar dynamic here, do you think? Is it just the financial part, or is there some kind of hubris that creates these dangers from corporations?

Noz Urbina  37:05

Well, corporations are collections of people; they’re agreements made among people. And I think it comes back to the very beginning; it’s tangentially related to the CPP, the complexity perception paradox: we have to have the right mindset. I think we can say that we’re in late-stage capitalism and, more importantly, late-stage individualism. It’s easy to get seduced by the idea that I’m investing in this organisation, that there’s a relationship between me and the brand I’m investing in. And we’re not able, with our mammalian brains, to track that many relationships; we can barely track all the relationships in our own lives. Tracking what happens to all of our investments and purchases is practically impossible. But we’re also celebrating it: there’s a celebration of individualism, a kind of me culture, “living your truth”. And that makes us very exposed to the message of late-stage capitalism and corporate stuff. Yuval Harari, and also Naomi Klein (if you haven’t heard of her, she’s really brilliant; just Google a couple of her videos), point out that it’s when we can’t get over ourselves that we are most vulnerable.

Noz Urbina  39:00

Harari, and several other thinkers on this point, point out that fascism and dictatorships and cults butter us up with messages. They create this kind of dichotomy; I’m thinking of weight-loss commercials in my head: you want to be this, you don’t want to be that; you’re great when you’re like this, and that other thing is not good. Creating this dichotomy of good and evil makes you feel like either you are good, and you should keep doing that, and this corporation is here to help you be the best you can be; or you’re in danger of being polluted and bad, in danger of whatever the product helps you ward off. That’s how fascism works. And why does fascism work that way, and why do corporations work that way? Because we as people work that way: individually, we want to avoid the bad and get the good. And I think the way to stave this off, the way this mechanism can be avoided, is accepting the whole truth. So I’m sitting here as a self-aware person trying to say: I’m somebody who teaches AI, and also tries to teach about the dangers of AI. I’m somebody who works with big corporations, and I’m also sitting here spending the last 35 minutes talking about how evil they are.

Noz Urbina  40:36

I own some Bitcoin, which is destroying the planet. I eat meat, you know. So I’m not trying to send the message “don’t be that”. The message I’m trying to send is: whatever you are doing, accept that you’re in it. The world is complicated. The world actually is complicated, and anyone who’s trying to tell you that it’s not is trying to get something out of you. So I think that’s our weakness: our desire to simplify, and our desire to feel good about ourselves. And anybody who’s playing into that can be that slot machine. It makes you feel great.

Larry Swanson  41:23

Yeah, I love that. In terms of helping individuals, it seems like all of us could be doing a better job of cultivating that mindset, one that’s aware of our vulnerabilities as individualist thinkers. And I’m also contrasting it with the work we do, which is so much about scale, about transcending the human inability to deal with more than about 150 relationships. There are some really fascinating paradoxes in there. We haven’t talked about what we’re going to cover in the third episode, so I don’t want to predict anything, but we are coming up close to time. Is there anything, as a last note, that you want to reinforce from the conversation?

Noz Urbina  42:07

I guess it’ll be the most controversial thing I’ve said. I’ve said a lot of things in this episode that may make people bristle, but I think one of the most controversial is that we need to repair our relationship with government and governance. I said earlier that we’re trusting corporations too much. That’s not just my opinion: the Edelman Trust Barometer, an organisation that has been tracking which institutions people trust, found that last year corporations were the most trusted entity, while universities, journalistic institutions, and governments are collapsing. Trust in those organisations is just absolutely falling apart; this lovely new US Supreme Court is a classic example, but it’s a global problem. Maybe we’ll talk in the next episode about the direct relationship between algorithmification and that collapse. But government is our only mechanism to combat corporations. Some people are going to want to hang up the podcast right now, because there’s so much distrust, and I’ll say hate, for governmental institutions and government in their current incarnation, that no one wants to participate anymore. And it’s leaving a power vacuum that, of course, corporations are running in to fill. But government is the only mechanism societies have managed to build to push back against corporations’ appetite for destruction. The things we can do are vote, show our political will, and change the governments we have; in extreme cases, revolution. But government is ultimately an organisational system: it’s how groups of people organise themselves. What often happens is there’s a revolution, they put in a new government, and the new government is the same as the old one, because it’s still made of the same thing, human beings. So we have to accept our vulnerability and imperfections. We talk about how bad the government is, but the government is made up of people from our society. And we have to say: okay, then I have to be part of the society that makes that system better. It’s not a separate thing. The people in government are not from another planet or another species; they just happen to have a different job. I stutter when I say this, because I know how many people can’t connect with the idea of government being something that has any hope, that could actually do any good in the world anymore. But that’s the message I want to end on: it’s our only hope. We have to govern ourselves better, and we have to organise ourselves to take organised action to protect ourselves. That’s the message I’ll go out on.

Larry Swanson  45:27

I love that. I did not expect to end here, but landing on democracy and citizenship: we just have to get out there, participate more, and take it back. Well, thanks so much, Noz. It’s been a really fun conversation.

Noz Urbina  45:40

My pleasure. It was a great one. Cheers!