Think again: Untapping AI opportunities

Omnichannel Podcast Episode 36

Watch now

A low-tech explanation of AI types and how to think about them. To get the benefits of AI, we need to change how we think about computers themselves and their role in our workflows.

This session goes deep into the transformative power of artificial intelligence across business, creativity, and more with AI expert Noz Urbina, alongside our guest host Larry Swanson. They try to give you new mental models that will help you get the most out of new AI technologies.

“AI enables us all to be programmers without being coders. It’s about understanding the problem, its context, its constraints, and the intended results, and being able to express that logically and clearly, understanding both your own bias and the AI’s bias. That’s what lets us win the future.” – Noz Urbina

Listen now


What you’ll learn

  • The Evolution of AI: Understand the rapid development of AI technologies and their significance compared to historical technological advancements.
  • Generative AI and Creativity: Explore how generative AI is pushing the boundaries of creativity and content creation, from writing to design.
  • Ethical and Societal Implications: Gain insights into the ethical considerations and societal impacts of widespread AI adoption, including issues of reliability and bias.
  • Practical Applications of AI: Learn about the practical applications of AI in business, content strategy, and beyond, including tips for effectively integrating AI into workflows.
  • The Future of AI: Speculate on the future directions of AI technology and its potential to transform industries, economies, and our daily lives.

Also listen and download on…


Noz Urbina
Urbina Consulting
Larry Swanson
Elless Media

Full session transcript



Larry Swanson  

Hi, everyone. Welcome to the Omnichannel podcast, hosted by OmnichannelX. Noz Urbina runs Urbina Consulting and is sort of the information arm of the Noz business empire. I'm Larry Swanson. I'm the guest host today; Noz hosts this podcast all the time. We were talking about this concept a few weeks ago, and he was like, well, I know a guy who can interview people on a podcast. Let's bring him over.

Noz Urbina /UC  

So I wanted to share some of my thoughts on AI, and I thought it would just be awkward and weird for me to come on here and blab into the microphone by myself. It seemed better to have a conversation with somebody who knows the industry and could ask intelligent questions, because I have a lot to say on this topic. I'm sort of bursting at the seams. There are some recurring concepts that keep coming back again and again, and it's a very big thing. I see it as the biggest thing of my lifetime. I think it's bigger than any movements or political turmoil or war or anything. So it's that big, and I want to really set aside some time to focus on it. That's the idea of this sub-series of Omnichannel podcasts looking at AI.

Larry Swanson  

That's really saying something, because you go back to the start of the interwebs, and that seemed pretty big at the time, and big even in retrospect. So the fact that you see this as even bigger says a lot. You also mentioned there are some insights you've had that you want to share, and I was particularly inspired by one. I can't remember if it was on one of your podcasts, but you've made the observation that we all grew up thinking that computers are these infallible, reliable, accurate things: you can give one the same inputs 1,000 times and you get the same answer 1,000 times. That's a little different nowadays with these new gadgets. Can you talk a little bit about that?

Noz Urbina /UC  34:43

Yeah, so there's some confusing English there: infallible but totally reliable, as opposed to what we have now. The phrase "unreliable computers" kind of hit me when I was starting to work with them. Let's start from the beginning. There's been a large debate since the generative AI boom started, when ChatGPT 3.5 hit and we started to see it become a mainstream concept, basically in 2023. In January, February 2023 the buzz was starting, and by the end of the year it was the biggest thing on the planet, shifting entire stock markets and the global economy. So last year was big; 2023 was huge. One of the big topics was that these AIs were incredible at being able to generate, create and simulate human creativity and content. They could create images, they could create texts, they could write poems or essays, they could write computer code, etc. But again and again (and in a funny irony, my Alexa just woke up when I said that) one of the recurring criticisms was that these computers would also "hallucinate", which became the term for when an AI, asked a question, would make up facts. They would just fake-news their way through an answer, because that's what they're designed to do. They were told to come up with an answer, and artificial intelligences do what they're told, in the sense that they are chasing an objective. They have certain priorities: come up with an answer, come up with an answer that's politically correct, come up with an answer that isn't dangerous. You give them all these goals, and then they do everything they can to try to achieve them, in pursuit of that come-up-with-an-answer-every-time goal.
Sometimes they would just reach into their collection of training information and throw it together in a semi-random way, into something that just sounded right to the AI but could be what we would call a complete fabrication. There are lots of examples of people looking up their own bio, and it tells them they went to a school they didn't go to, or won a prize they didn't win, or wrote a book they didn't write. They're just making things up. These hallucinations are inherent in the way that generative AI systems work, because it's that kind of crazy creativity that allows them to create anything in the first place. Otherwise they would just be a search engine: you would just get the training data back, they would just tell you what they knew and nothing more. Their ability to actually create comes from the fact that they are a little off the wall. But this presents a question: how do we square that with what we know computers to be, these infallible, repetitive calculating machines that just take over our rote tasks? That's the challenge I'm helping organizations with a lot today: how do you use a computer that's a little bit of a wacky artist, as opposed to, you know, a dowdy, locked-down economist, which is what we're all used to?

Larry Swanson  

Right, and yeah, that dowdy, locked-down economist might be better at providing answers than a creative writer, but the creative writer is closer to an LLM. Anyhow, it sort of gets at the range of capabilities. Maybe we've just been using these kind of old Babbage computing machines for 60, 80, 100 years, and now we're like, oh, they can do other stuff as well. I wonder if that's part of it. One of the capabilities that arises from this creativity: if you're brainstorming and need a bunch of ideas, you can, you know, crowdsource something, or hire a bunch of interns, and I've heard you liken these AI agents to interns. I love the way you talk about it. Can you tell us a little bit more about that?

Noz Urbina /UC  39:17

Sure. So the unreliable computer thing is: how do you work with a computer that can be creative but can't be trusted? When I was first reframing AI, I would tell people, imagine you were just given a team of 10,000 interns fresh out of school. You could get an incredible amount done with them. The power there is amazing, the brute-force power to create, to do things, to try things, to take on complex but unfun tasks. But can you trust them? No. They have to be guided, checked, validated. You would have to build processes for them that take into account their needs, and not treat them like you would treat another human being, and not treat them like you would treat a computer. Obviously interns are human beings, but you don't treat them like an experienced person who has been doing the job for years, who you hire because they can just do the job. There's a phrase that Sam Altman uses: tasks, not jobs. They can do tasks, but they can't just take over someone's whole job, because of this unreliability. So I like to train people to imagine that they're working with an intelligence rather than a computer, but an intelligence that needs to be part of a team and part of a workflow that makes sense with the slightly jittery, unreliable nature of generative AI.
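
[Editor's note: the "guided, checked, validated" workflow Noz describes can be sketched in a few lines of Python. This is an illustrative sketch, not code from the episode; the draft texts and check terms are invented, and in practice the drafts would come from a generative model rather than a hard-coded list.]

```python
# Sketch of the intern workflow: generate many unreliable candidate drafts,
# then keep only those that pass an explicit validation check.

def validate(draft: str, required_terms: list[str]) -> bool:
    """An explicit check: does the draft cover every required term?"""
    return all(term.lower() in draft.lower() for term in required_terms)

# Stand-ins for what a generative model might produce for one task.
drafts = [
    "We cover pricing and support options in detail.",
    "Here is a poem about clouds.",
    "Pricing starts at $10; support is available 24/7.",
]

# The validation step is what turns untrusted output into usable output.
approved = [d for d in drafts if validate(d, ["pricing", "support"])]
print(len(approved))  # 2 of the 3 drafts pass the check
```

The point of the sketch is that the checking process lives outside the "intern": the workflow, not the generator, is what makes the output trustworthy.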

Larry Swanson  40:57

As you talk about that, I'm picturing a new kind of management skill that's needed here to engage these new collaborators: enthusiastic, ubiquitous, really knowledgeable about a lot of stuff, but boy, do they need a lot of guardrails and boundaries. So how do you operationalize that? Is there a general framework for how to manage them? And then, like you said, they're good at tasks, not jobs, so how do you get them focused on the right specific task?

Noz Urbina /UC  41:32

Yeah, so that leads me into another reframing phrase that I use, which is that AI enables us all to be programmers without being coders. The full version of that is that we have to think about the problem, its context, its constraints, the intended results, what you want out of it, and be able to express that logically and clearly, understanding your own bias and the AI's bias. That's what kind of lets us win the future. So when you're building a workflow, you have to lay that out in advance; you have to become a very good leader. With good leadership, you know, we talk about SMART goals, we talk about OKRs: how you set goals, how you set priorities, how you give instruction, how goals ladder up to other goals. That's the hallmark of good leadership. Working with AI makes you this kind of leader of a 10,000-intern team, but it requires you to up your game in how you structure your tasks and how you approach things. I really saw a transformation in my thinking in 2023. I started jumping into these things and playing with them early in the year (I'd like to talk later about other types of AI, which we've been using for much more than a year), and by the end of the year I was waking up in the middle of the night writing prompts in my head, because it restructures your thinking patterns: what information will this intelligence need? How can I best make clear what I'm looking for? You don't just treat it like a search engine, saying give me this or give me that. You have to think: what is the scenario we're thinking about? What context do I have to supply so that scenario is clear? I can give lots of examples.
One of the common ones we do is persona simulation and journey mapping. We try to set up an AI with the full context of a persona: all the research we have available on the business, what the typical background is, what their education level is, some of that demographic stuff, which already filters the AI to have a certain focus. Then we explain to it that we are going to be doing these things called customer journey maps or experience maps, and we explain the concept of stages, and that during a stage you might have touchpoints that you access content through, or design interfaces that you're looking at, or questions: what are the informational questions you have as a user when you're trying to use an interface or access content, and how do those break down over time and by type, you know, transactional questions, informational questions, and so on. So before we start asking the AI to do all this, we explain these high-level concepts, and we explain them in a consistent way, so that all of our persona simulations and journey-mapping processes will operate consistently. And I have found that giving it that structure (this is the task we're working on, this is why we're doing it, this is who you're doing it for, and this is how I would like it to come out) really works. If you follow a lot of AI stuff on LinkedIn, you're constantly seeing these prompting guides, and what I feel very validated by is that they're all kind of the same. We've all arrived at the same place, and now those guides are being officially released by OpenAI and by Facebook and by Microsoft, and they say the same thing: you treat the AI like you treat any structured learning situation.
You use consistent labeling, you use structured instructions, you make nice lists. You lay out the task context, you lay out the objectives you want. And then our key ones: check your work, and think step by step. We're seeing those again and again. When you're talking to an AI, it's like a human: if you just barrage it with a huge task or set of instructions and ask it to do it all at once, it gets overwhelmed and confused. But if you tell it to think through the problem in stages, like you would tell any junior person, or even a child learning something, it comes up with better answers. And they're discovering that it's hundreds or thousands of times more efficient to have it come up with an answer, or come up with ten variants of an answer, think about those, check which one is best, and then move on, rather than trying to create a super AI that you've trained so well that it does everything right the first time. They're giving up on that kind of model; we've exhausted it. Now we're seeing that better structuring of the instructions, and then checking our work like they taught you in school, is what's going to give you the best results.
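
[Editor's note: the pattern Noz describes here, consistent labeled sections plus step-by-step and check-your-work instructions, can be sketched as a small prompt-building function. The section names and all the example wording are illustrative assumptions, not a template from the episode.]

```python
# Sketch of the "structured instructions" prompting pattern: consistent
# labeled sections, explicit rules, and built-in step-by-step validation.

def build_prompt(context: str, task: str, rules: list[str], output_format: str) -> str:
    """Assemble a prompt from consistently labeled sections."""
    rule_lines = "\n".join(f"- {r}" for r in rules)
    return (
        f"## Context\n{context}\n\n"
        f"## Task\n{task}\n\n"
        f"## Rules\n{rule_lines}\n\n"
        f"## Validation\n"
        f"- Think through the task step by step before answering.\n"
        f"- Check your work against the rules before finalizing.\n\n"
        f"## Output format\n{output_format}\n"
    )

prompt = build_prompt(
    context="We are mapping the journey of a first-time customer persona.",
    task="List the questions this persona has at the research stage.",
    rules=["Stay in persona", "Do not invent product facts"],
    output_format="A numbered list, one question per line.",
)
print(prompt)
```

Because the headings and section order never change, the same template can drive every persona simulation or journey-mapping prompt consistently, which is the point Noz makes above.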

Larry Swanson  47:18

Yeah, as you talk about that, I'm wondering: it sounds so much like a lot of what you've done over your career, like technical documentation, keeping track of how you explain the operation of a thing to other people. And a lot of that kind of content is used in training programs and things like that. I think most people who listen to this podcast know about your background, but you go back literally to before the beginning of that world of structured content. Do you see links between that background and what you're seeing now?

Noz Urbina /UC  47:48

It's interesting you said it that way. I do go back to the beginning; I go back to before the current wave of structured content. We're now hearing a lot about headless CMSes and content models and structuring your content and omnichannel. I started my career easily a decade before, more than a decade before the recent popularization and mainstreaming of these content concepts. I was a kid just out of school; I was the intern who happened to land a job in a tiny company called SoftQuad, which was at the forefront of building the structured web, what we call the semantic structuring era, and before. So I came in, and there were people who had been in it for 20 years, going back to the '80s, when the web itself was being developed. I just had Steven Pemberton on the podcast a little while ago; he goes back to before the before-times. What was happening back then is that in parallel to the web as we know it growing, exploding, there was always this kind of tiny community in the background saying no, we should be structuring the content in a consistent and intelligent way, so that it's not just for web pages; it should be for all of our channels, and it should be personalization-ready and automatable. So I was born into that 1999-2000 era and have been working in it since then. And yeah, the fundamentals are understanding not what do I want to write, but what's the logic of what I'm trying to accomplish. What's the logic behind how you create a good product overview page? What's the logic behind how you construct an email? A lot of what becomes aspects of content strategy and content engineering comes back to: what's the mental model behind these things? Headless CMSes are built with mental models instead of page models.
You say, okay, I'm going to make a product; it's going to have features; it's going to have promotions. Those mental models of things are what we manage in headless CMSes, rather than web CMSes, which manage the actual pages. So yeah, that's my background, and it's helped me a lot, because once you get to AI, you are explaining models, explaining the logic of things, so that an intelligence can do work for you, as opposed to writing code so that a program can execute it, or simply using a tool as-is. It's this participatory process of going, okay, I've got a plan, an idea, and this intelligence can help me. How do I collaborate with it? That's been incredibly helpful.
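
[Editor's note: the "mental models instead of page models" idea can be sketched as a minimal content model. The types and field names below are hypothetical illustrations, not any vendor's schema.]

```python
# Minimal sketch of a headless-CMS-style content model: we model the things
# (a product, its features, its promotions), not a web page. The same
# structured content can then be rendered to any channel.

from dataclasses import dataclass, field

@dataclass
class Feature:
    name: str
    description: str

@dataclass
class Promotion:
    headline: str
    discount_percent: int

@dataclass
class Product:
    name: str
    overview: str
    features: list[Feature] = field(default_factory=list)
    promotions: list[Promotion] = field(default_factory=list)

def render_email_snippet(p: Product) -> str:
    """One of many possible channel renderings of the same content model."""
    promo = p.promotions[0].headline if p.promotions else ""
    return f"{p.name}: {p.overview} {promo}".strip()

product = Product(
    name="Acme Widget",
    overview="A widget for everything.",
    promotions=[Promotion(headline="20% off this week.", discount_percent=20)],
)
print(render_email_snippet(product))
```

A web page, a chatbot answer, or an AI prompt could each be another `render_*` function over the same model, which is the omnichannel argument in miniature.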

Larry Swanson  50:56

As you're talking about that, I'm reminded that my question was very focused and specific, but there's so much broad applicability to these tools. We're recording this in February of 2024, almost a year and a half after the introduction of ChatGPT and the current GPT craze, and so much of the fuss still seems to be around generative AI. But there's a lot more going on, and I know you've had some thoughts and ideas about this. That generative aspect certainly helps a lot, like the stuff you were talking about earlier about generating persona ideas. But can you talk about some of the other capabilities people should be thinking about as they look to incorporate AI?

Noz Urbina /UC  51:39

Yeah, and I think there's a good transition here, because the thing that is most immediately applicable, if people are reading this and going, okay, how can I make my ChatGPT experience better tomorrow, is: use consistent headings and templates in your prompts. Whenever you speak to an AI, don't just tell it what you want. Try to break it down: context, task, rules or validation steps, output format. Think about what your consistent headings are. And then, within the task itself, if you want it to write a LinkedIn post, define the structure of a LinkedIn post: you want your hook at the beginning, your key points, your call to action, and your hashtags. Put those actual structures in, and use those terms and structures consistently. You can configure AIs so that they remember them, so you can have your writing AI, or even your LinkedIn-post-generating AI, if you use that consistency over and over again. Some of that bridges very nicely into the other areas. When I talk about AI, I get a little bit frustrated that we're talking so much about ChatGPT and generative AI, which is incredibly useful and powerful. It has changed my career; we're so much more productive, and we can help clients do amazing things with it. But that is not the sum total of AI. The areas of AI that I think are not getting enough attention are things like categorization AIs. One of the things that we need to make the world a better place is more tagging, you know, for everything that we create, if it's digital.
We need to put metadata or tagging or categories, taxonomy, ontology, whatever your name for how you describe things, so that we can find them, manipulate them, transform them. For decades that's been one of the things that AIs could do really well. Compare it to automatic translation: automatic translation in 2002 was literally a joke. We would hold it up and laugh at how terrible it was. I saw skits about it on comedy shows, where you'd put stuff into a translator and see how funny it was when it came back. And now it really works. You get something really, really decent. I've seen professional translators whose jaws hit the floor when they saw the output of what translation AI can do today. Same thing with auto-taggers. Auto-tagging is an incredibly powerful thing, and it helps you increase your content quality and your understanding of what you've got in your repository of whatever digital assets you have: images, videos, whatever. There are ways to automatically tag it these days, and there are all sorts of applications for that. If you were to automatically transcribe all the calls you get to sales, support, training, the help desk, and auto-tag them, imagine how much business insight you would unlock. First you AI-transcribe them, then you AI-auto-tag them, and then you can ask questions like: what are the most common problems our users are having? Then split that by region. What are people in this geography calling in about? Where are they in the relationship lifecycle? What are our customers asking, as opposed to pre-sales questions? All of this gets unlocked if you use AI not just to generate stuff, but to analyze and sort the stuff that you have. Then the next question is: how do they tag it?
Are you just looking for frequent keywords, sort of like a search engine? That's not necessarily the most effective thing. The area of AI which I think is most powerful and most under-represented is what we call knowledge modeling: how do we actually describe knowledge to the AI? There's a common example: you ask an AI, if I was to ride my bike over a bridge, and under the bridge there were lots of nails and broken glass, what would happen? And the AI goes, well, your tires would probably be damaged by the nails and broken glass, because linguistically it understands that if I ride a bike over nails and broken glass, I'm going to hurt my tires. It misses the fact that there's a bridge in between, because the AI does not have a mental model of these things. Similarly with search engines: if you're talking about killer whales, you're also talking about orcas, which are the same thing as killer whales; there's a synonym relationship. Or if you're talking about ancient Egypt, you could be talking about pyramids and tombs and pharaohs, but that could also relate to burial rituals and mummification. These relationships of concepts, and understanding the wider relationships of all these tags we're applying, is a whole discipline in itself. There's specific software for doing it, and if you give your AI access to that software, it is so much more powerful. I've got numbers. Sébastien Bubeck, who is a senior principal researcher at Microsoft, says that we're getting huge, like 1,000%, increases in performance from AIs if you give them what he calls textbook-quality content. So if you've structured it, you've put in all the headings, you've got indexes, you've got tagging, you've got glossaries and vocabularies, like what you would get in a school textbook, and you give that level of content to the AI,
That's where you get a quantum leap: ten-times-better output. So just thinking, I'm going to throw all my unstructured stuff at this thing and I'm going to have a great super AI engine that's just going to figure it out for me? Even the biggest companies in the world are saying that's not the way to approach this. You need to think about knowledge modeling, think about tagging against a knowledge model, and then give all of that good context and structure to the AI so that it can perform well. Taking us all the way back to the interns: if you just handed them a pile of notes and images, they could work, but if you give them a clear playbook, with tasks and processes and reference information all nicely structured for them, they're going to be way more productive.
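
[Editor's note: the transcribe, auto-tag, analyze pipeline Noz outlines can be sketched as a toy program. Everything here is invented for illustration: the synonym ring is a miniature stand-in for a real knowledge model, and the call data is fabricated; a real pipeline would start from AI-generated transcripts.]

```python
# Toy sketch of the transcribe -> auto-tag -> query pipeline, with a tiny
# synonym ring standing in for a knowledge model.

from collections import Counter

# A miniature "knowledge model": each concept lists its synonyms, so that
# "invoice" and "payment" both resolve to the billing concept.
SYNONYMS = {
    "orca": {"orca", "killer whale"},
    "billing": {"billing", "invoice", "payment"},
}

def auto_tag(transcript: str) -> set[str]:
    """Tag a call transcript with any concept whose synonym appears in it."""
    text = transcript.lower()
    return {concept for concept, terms in SYNONYMS.items()
            if any(term in text for term in terms)}

# Fabricated call transcripts, as if produced by an AI transcription step.
calls = [
    {"region": "EMEA", "text": "I can't find my invoice for last month."},
    {"region": "EMEA", "text": "Payment failed twice on checkout."},
    {"region": "APAC", "text": "Saw an orca on the whale-watching tour!"},
]

# "What are the most common problems, split by region?"
by_region = Counter()
for call in calls:
    for tag in auto_tag(call["text"]):
        by_region[(call["region"], tag)] += 1

print(by_region.most_common(1))  # billing dominates the EMEA calls
```

Swapping the keyword match for a real classifier, and the synonym ring for a real taxonomy or ontology, gives you the business-insight queries described above without changing the shape of the pipeline.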

Larry Swanson  59:24

And speaking of tasks: every author's least favorite task, after they've created a piece of content, is tagging it, so auto-tagging must be the biggest boon to that community of all time. But as you're talking about that, back to the intern thing, there are so many tasks you just articulated in there, around translation and localization and so on. I guess a lot of this gets at how much better the models are getting, and back to the very first question about these unreliable computers; now you're talking about injecting knowledge modeling into this. How are you seeing this? You've been at this about a year and a half now. Are you seeing progress in, maybe not the reliability, but the trustworthiness of the end result, based on the process stuff you're helping enterprises with, as well as the technology itself? How quickly are we coming to really realizing AI's optimal benefits?

Noz Urbina /UC  1:00:33

Okay, two things. First of all, I'm about 10 years into this. I've got quotes that go back to when we were working with one of Europe's largest banks, and they were implementing their chatbot; this is 2016. These were among the first chatbots in the world, and only major, major corporations, banks and so on, could afford to do any chatbots or any artificial intelligence. Back then they said the number one thing that improved performance was taxonomy: tagging the answers that the chatbot was looking up. That was the easiest, most effective way of moving the needle on chatbot performance. This is all way before ChatGPT and so on, and it's continued to be true. How far are we from achieving the dream, the benefit? I always try to remind people: every AI you're seeing, every AI tool that's on the market, is the worst one you will ever use. All of them. If you're familiar with Moore's law, it's the idea that computers get twice as "computery" every 18 months: twice as much memory, twice as much network speed, twice as much processor speed; they get faster, bigger, etc. That was already very difficult for humans to keep up with. AI is moving at ten times per year. So that's 1,000 times a decade. We are on ChatGPT 4 now; trace it back to GPT-1, I don't remember how many years ago that was, but we are seeing 1,000% increases on a regular basis. We are so far from reaching the limits. I was just telling you about the reports from Microsoft Research saying they're getting 1,000% improvements now. So once that becomes commoditized and accessible for everyone, and you get internet-level effects of everyone contributing ideas and open source, which scares the hell out of me, and which we can talk about later.
But once all of these things are accessible, you get a virtuous cycle of more and more powerful applications, as better and better tools let people build better and better applications. So we're just at the very, very beginning, the single-celled organism of AI.

Larry Swanson  1:03:16

Yeah, and apologies for forgetting your earlier involvement in this. I was just thinking of the current ChatGPT craze. Apologies for that.

Noz Urbina /UC  1:03:23

Which is hard, you know; they have co-opted the conversation. Midjourney, ChatGPT: they've taken over the concept of AI in our minds, to the point that even people who were working with the older types can sometimes do it. I find myself saying "AI" when I mean merely the current generative AI, because that's just becoming part of the language.

Larry Swanson  1:03:43

Let me ask you quickly about that, because how can we de-propagandize our consumption of this information? I think you're right that Midjourney and ChatGPT and all of these have taken all the oxygen out of the room and, if not hidden, sort of subsumed some of these other useful tools and purposes that you've talked about. Any tips for hacking the community, all the social media, all the news about this? How can we best discern the true underlying facts of the situation?

Noz Urbina /UC  1:04:16

Well, it depends on your level. There isn't one podcast I can recommend, or one book, or one anything, because what I would recommend to you, Larry, would be different from what I would recommend to a lot of my clients. It depends on your level of understanding of computer concepts and information-modeling concepts. So I always come back to these reframing ideas. What are the differences between regular computers and these crazy, unreliable generative AIs? The hack is realizing that it's an invaluable thing that needs this kind of guidance; we've been through that with the 10,000 interns and so on. Then practice. Learning to speak to the AI is a skill set, the same way that learning to use a word processor or becoming a good Googler is a skill that you learn. So play with them; try to get them to do things. And then remember what we said: AI enables us all to be programmers without being coders. What the heck does that mean? I don't know how well I explained it the first time, but it has to do with this: when you're coding, you have to figure out the solution to the problem, and then you write special code that allows the computer to do that repeatedly and quickly. When you're a higher-level programmer, you're saying: what do I want my thing to do? What are the outcomes I expect? What are the inputs I can give? What are the outputs I expect? And that is what you're doing with an AI. Don't try to tell it, line by line, thing by thing, what to do; give it, like I was saying, "I'm looking for a LinkedIn post that has these qualities, now go for it." And then the last thing is brainstorming with AI, which I've found, if prompted properly (people can hit me up for some good prompts to make this really fly), is incredible.
Getting the AI to do blue-sky, colored-markers-and-sticky-notes-on-the-wall kind of brainstorming is incredible, because it comes up with stuff at such volume and speed that even if 70% of it is either cliché or unusable, you get an amazing amount of output. I have found medical doctors, people who are running teams with a pharmaceutical training background, who don't have any real technical aspect to their job, and they sit there and play with things: how am I going to structure this presentation? How might I talk about this topic? Just throw out those blue-sky questions and you will start to learn how to leverage this in your day-to-day work. It's 20 bucks a month: get a ChatGPT subscription and start playing with it, even on your phone, and that will be valuable education for the rest of your career.
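The "programmer without being a coder" idea Noz describes, stating the problem, its context, its constraints, and the intended results rather than dictating steps, can be sketched as a small prompt-building helper. This is a hypothetical illustration, not a method from the episode; the function name and field layout are our own assumptions.

```python
def build_brief(goal: str, context: str, constraints: list[str], output_spec: str) -> str:
    """Assemble a spec-style AI brief: describe the goal, context,
    constraints, and expected output, instead of line-by-line instructions."""
    lines = [
        f"Goal: {goal}",
        f"Context: {context}",
        "Constraints:",
        *[f"- {c}" for c in constraints],
        f"Expected output: {output_spec}",
    ]
    return "\n".join(lines)

# Example: the LinkedIn-post brief from the conversation (details invented).
brief = build_brief(
    goal="Draft a LinkedIn post announcing our new podcast episode",
    context="Audience: content strategists; tone: conversational but expert",
    constraints=["under 150 words", "no hashtags", "end with a question"],
    output_spec="three alternative drafts to choose from",
)
print(brief)
```

The point is the shape of the request: outcomes, inputs, and outputs up front, with the "how" left to the model.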

Larry Swanson  1:07:31

Yeah, that notion, especially brainstorming, that's the classic generative use; you're right, that's what they're built for. But brainstorming is just one of many activities in a modern office, and you've talked a lot about custom GPTs for things like persona development, and about auto-tagging and all this stuff. There's a new category of tooling I've heard about, that some of the big tech companies are working on, around teaming these, turning a bunch of GPTs and prompts into actual agents. I don't know if this transitions from tasks to jobs, but it seems like it's on the way there. Can you talk a little bit about that?

Noz Urbina /UC  1:08:14

So Sam Altman said that, you know, they can do tasks, not jobs, and then within a few months he's talking about the one-person, was it billion-dollar or trillion-dollar, company.

Larry Swanson  1:08:27

Yeah, the one-person billion-dollar company. I just read that post the other day.

Noz Urbina /UC  1:08:31

So, yeah, is that a contradiction? Kind of. What it's saying is that it depends on the definition of a task: once a task becomes complex enough, yes, it does start to rub against what a job is. So the idea of an agent is an AI that is not just producing content. It's not an auto-tagger, it's not a translator, it's not the older kinds of AI we had, and it's not the generative AIs we have today. The way to think about an agent is like a super Alexa or a super Siri, something that can actually combine a search engine plus a personal assistant. "Find me the cheapest flights. List me ten great places to go in Greece. Think of a five-day itinerary for me that gives me lots of rest breaks and time for walking on the beach, plus gourmet food. And then book some hotels, or options for me, that have a refund policy and are within this price range." You cannot do that with any tool in the world right now, or there are a couple that are trying to start to be able to do it. But that's where we're going: the idea where it's not just a producer of content but an executor of tasks. So you can tell it: think of an itinerary, go online, look up the flights for me, and, if you trust it enough, buy it for me or reserve it for me, and come back to me with these checkpoints. That's kind of an agent. And what I'm playing with now in my spare time, I haven't gotten a chance to put this into production use yet, is the idea of agent swarms. I don't like the term "agent swarm"; it really sounds like a Terminator, Matrix kind of awful thing. I like to talk about your AI ecosystem instead.
So if we have auto-taggers over here, auto-translators over here, and transcribers over there, how do all these kinds of AI organisms fit together into an ecosystem, and what's the lifecycle of a task as it moves through it? Say I am trying to develop a content plan, or a user interface design. Where does AI fit into all of these? Can I have an AI over here that automatically sends certain information, on a regular basis or when I ask it to, to this other AI over here? So this one does the transcription, this one does the auto-tagging, and this one does the insight report. And then I, the human, come in and say, I want to look at the insight reports for this region, and I've had three AIs that have collaborated to get me my insights report. Then I give that to my content-planning AI and my UI-design AI, which will start to build user interface wireframes and content plans. So it's looking at how we get these AIs talking to each other, rather than thinking I'm just going to create the one super AI that does X. It's back to this teaming idea: how do we get multiple AIs that feed each other, and continue to move ourselves higher and higher in the workflow hierarchy? Which is where this one-person billion-dollar company comes from. You are the CEO now. You're not even just a team leader with 10,000 interns; you are a CEO who has 10 senior managers who each have 10,000 interns. Now, what do you do? So that kind of mental model: okay, I'm currently here, how can I leverage my interns, but then how can I put myself a little bit out of a job and take a higher-level, more strategic approach to what I do, so that I'm only doing the essential tasks?
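One way to picture the ecosystem Noz describes, transcription feeding tagging, tagging feeding an insight report, and that feeding a planning step, is as a chain of small services. The sketch below uses stub functions in place of real AI calls; every name and return value here is illustrative, not a real product or API.

```python
# Hypothetical stand-ins for separate AI services in the ecosystem.
# In practice each would call its own model or API.
def transcribe(audio_ref: str) -> str:
    """Transcription AI: audio reference in, text out."""
    return f"transcript of {audio_ref}"

def auto_tag(transcript: str) -> list[str]:
    """Auto-tagging AI: text in, topic tags out."""
    return ["AI", "agents", "workflow"]

def insight_report(transcript: str, tags: list[str]) -> dict:
    """Insight AI: combines the two upstream outputs into a report."""
    return {"summary": transcript[:40], "topics": tags}

def content_plan(report: dict) -> list[str]:
    """Planning AI: turns the report into next-step content ideas."""
    return [f"Article idea: {topic}" for topic in report["topics"]]

# Lifecycle of one task as it moves through the ecosystem.
transcript = transcribe("episode36.mp3")
tags = auto_tag(transcript)
report = insight_report(transcript, tags)
plan = content_plan(report)
print(plan)
```

The human stays at the top of the chain, reviewing the report and the plan rather than doing each step, which is the "move yourself up the workflow hierarchy" idea.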

Larry Swanson  1:12:43

The way you say that, because that's often been a goal throughout my business history: how can I put myself out of a job and go up a notch? And by intelligently employing these new agents and grouping them together, you create levels beneath yourself that can push you up the economic food chain. Yeah.

Noz Urbina /UC  1:13:04

Exactly. So I think that's what everyone's got to do: think of how, if I had all this power, what would I do with it? You know, it's kind of fitting that this is our first one of these, because it's got some examples here, but it's still nicely abstract. There's a lot more that we can get into.

Larry Swanson  1:13:25

Oh, sorry. Go ahead.

Noz Urbina /UC  1:13:27

I also do want, at some point, to talk about both the lower-level issues, more examples and more practical stuff, and also the higher-level stuff: the social impact, ethical impact, and moral impact of these tools on our world.

Larry Swanson  1:13:41

That's what I was just going to ask you about, because when you said "great power," I kind of alluded to that with extending the pyramid, and you're always "Aha, I have my business empire," and I'm like, but with great power comes great responsibility. You alluded to a couple of ethical issues, like bias in data, and a lot of those ethical issues are probably a whole other episode, and good thing this is a series. But do you want to talk a little bit about that, to set the table for subsequent conversations?

Noz Urbina /UC  1:14:08

So, to come back to the very beginning of the episode: I think this is the biggest thing that's happened in my lifetime, full stop. I think it's the most powerful thing that has happened, and also, therefore, the most dangerous thing. And I think the implications of that stretch far beyond my usual talking points of how we can get better experiences out to customers and things like that. So I do want to talk a little bit about what it means to be involved in having this level of power. If you look at the influence that big tech has on global culture and politics, that's pretty huge. A lot of our clients and a lot of our listeners are from Fortune 100 companies that are more powerful than nations today. Google is more powerful than many countries on the planet, as is Facebook, as is Apple, as are similar pharmaceutical companies, et cetera. So with this great power, how do we wrap our heads around the impact of what we're doing? Like the idea of open-source AI, which makes me shiver in my boots: what should we do about that, and what is going to happen? What are the risks? We can just say, "Oh, it's really powerful," and people come up with things like, "Oh, what if somebody makes a bomb or a chemical weapon?" I find that to be deeply boring. I'd like to talk about what I think are the real risks, the things that will sneak up on us. But as you said, that's a whole other episode. There are a lot of risks with all this power that we don't quite comprehend yet, and I'd like to explore them as best I can.

Larry Swanson  1:15:58

Yeah, I'm looking forward to going deeper into that, for sure. Anything last before we wrap up? Anything you want to put as a little cap on the end of the show?

Noz Urbina /UC  1:16:08

Please, people, get in touch. I'm so interested in talking about this stuff lately. We have our RAUX methodology, which we've been putting out there. So if you look for RAUX or Urbina Consulting you can find a little bit more about the particular methods, prompts, and examples that you can use. And otherwise, what are your challenges and goals? What do people want to do? I'm very interested in use cases, so for anybody who wants to reach out about that, I'm always interested in talking about how you envision using AI.

Larry Swanson  1:16:45

Right. Cool. Well, thanks so much. This was a fun conversation. Absolutely.

Noz Urbina /UC  1:16:50

Yep, thanks for doing it. I'm looking forward to the next one.

Larry Swanson  1:16:53

Likewise, for sure. All right. Thanks.

Noz Urbina /UC  1:16:55

Thanks, everybody.