Podcast Ep. 19 – AI, ontologies, and componentized content with Yanko Ivanov, Enterprise Knowledge

In this episode, we hear from Yanko Ivanov, Enterprise Solution Architect at Enterprise Knowledge. 

He and Noz Urbina define and then differentiate between the often and easily confused concepts of

  • domain models
  • content models
  • ontologies, and
  • knowledge graphs.

By zooming out and covering all these terms together, they illuminate how each contributes to different business scenarios by addressing different aspects of structured content and data.

In this podcast, they also discuss how all these concepts connect with AI, specifically LLMs (large language models) and their potential issues with reliability, traceability, and provability.



Join us ONLINE.

Super-early bird pricing is available from now until February 28.



In this episode, you’ll hear why:

  • Domain and content modelling are different, complementary, and essential in organizing information effectively.
  • A domain model helps build content models by establishing key concepts and relationships.
  • Formal ontology and knowledge graphs enable the integration of structured data with content, forming a unified platform.
  • Large language models have potential issues with reliability, traceability, and provability, but could connect with ontologies to be more reliable.

About our guest

Yanko Ivanov
Enterprise Knowledge

Yanko Ivanov is a Senior Content and Solution Architect at Enterprise Knowledge, LLC. He is an expert in strategy, design, and implementation of knowledge and information management systems.

Mr. Ivanov has designed and implemented a number of enterprise content management solutions for organizations in the private and public sectors. He is a certified Project Management Professional (PMP) as well as a Certified Scrum Master (CSM).

Blending his technical background with his knowledge management strategy experience, he specializes in gaining a deep understanding of his clients’ knowledge and information management needs and developing practical roadmaps to achieve their KM vision.

Full session transcript


Noz Urbina  00:02

Hello, everybody. I’m Noz Urbina, host of the Omnichannel Podcast by OmnichannelX and founder of Urbina Consulting, which brings you OmnichannelX and all our delightful resources. I’m here today with Yanko Ivanov, who works for a company that Urbina Consulting partners and collaborates with on a regular basis: Enterprise Knowledge. Could you tell the audience a little bit about yourself?

Yanko Ivanov  00:26

Sure, thanks. Nice talking with you, and thanks for having me today. My name is Yanko. I’m an Enterprise Solution Architect with Enterprise Knowledge, and I’ve been doing this for quite a bit. Content management, content delivery, semantic solutions: those pieces have always been part of my career from the very beginning, from the technical side of things down to the business implementation and business goals. In the most recent stage of my career, I’ve really been focusing on structured content and semantics and how these areas overlap. And I’m really excited to talk to you about it.

Noz Urbina  01:02

Awesome. Yes. So Yanko and I work together; we have very interesting Venn diagrams of our backgrounds and skill sets. And I think one of the interesting things is that we both work on that boundary between the technical people and the business people. So we’re interested in how the business is going to derive value and how they’re going to operationalize solutions, but we do have some technical background. What we’re going to be talking about today is, essentially, at the highest level, various ways of organizing your content. So we’re going to talk about domain and content modelling, ontologies, knowledge graphs, and of course, no conversation today is complete without some mention of AI and large language models, aka ChatGPT and the like. To kick us off, I want to start with a topic that’s very interesting for me, and one we’re getting a lot of demand and interest in these days: domain and content modelling. I’m going to define those, and then we’re going to see where domain and content modelling end, and where things like ontologies, knowledge graphs, and all that other stuff that’s coming up begin. So, starting with the domain model. When we’re doing domain modelling in projects, what we’re talking about is figuring out the relationships between all the key things in a subject domain. There can sometimes be confusion around the word domain: I’m speaking in terms of a subject, like a domain of expertise or a domain of knowledge, not a domain like a URL. So no, we’re not talking about web domains, although you may use this for your web content. We’re talking about a domain of expertise or knowledge: what are the things that we’re talking about?
So if we were in the pharma domain, we would have drugs and prescribers and patients and conditions and side effects, and so on. The reason we start with domain modelling is because we’re building the key and important relationships between the key concepts in this area. And, incredibly, this is often not done. When we’re doing content, people just kind of launch into projects and start creating content, but they haven’t thought about the underlying domain and how the different things relate to each other. And there’s a lot of benefit that can be derived, because when you take the base classes and concepts that are in your domain, and you start to extend those and say, okay, what do we want to say about these things, then your content model, which is your content types, the structures of your content, and how they relate to each other, will be rooted very much in reality and share the mental model that already exists in the heads of your audience. I’ve been content modelling for years, and I used to content model from documents. I would go into an organization and say: well, what are we trying to create here? These are the documents that you’re producing. Okay, so let’s model these up so you can create them more efficiently. Domain modelling was very, very powerful, because before we started modelling to reproduce the status quo, it started to ask: what is the underlying architecture here? What is the logic behind all this? How do people think about it? Getting into domain modelling and journey mapping to really understand the information and experience space that your potential content consumers and content creators are living in, before we start saying how we should package things up into documents: that’s been very, very powerful. So I’ve kind of already alluded to a content model. Let’s say we’ve named all our things.
If we were staying in pharma, we’ve named our drugs, our side effects, things like studies and participants, and all these different things, how they all relate to each other, and what the rules are around those relationships. Now we get into: what are we going to say? For example, a study may be talked about in dozens and dozens of documents. It’s one thing in the real world, a study, you know, a clinical trial; it is a study on a drug and how it works. But how we may talk about it when we’re responding to a patient inquiry, how we might talk about it when we’re speaking to a regulating body, or when a medical professional who works for the pharma brand is speaking to, you know, another medical professional: all of these are different contexts that might require different content types that may or may not share information. So you’re going to have consistent things like study names, participants, design, geographies, and the other kinds of key findings. There’s lots of stuff that you’re going to share, but how you talk about them may be encapsulated in different content types, depending on the need, the use case, and so on. So that’s how the domain model relates to the content model. Once we’ve got a content model, then we can create this nice, what we call semantic structured content. I constantly use the metaphor of Lego: if you want to Lego-ify your content and actually build these reusable components that you can take apart and build up into different things for different people, or do personalization, all that good stuff, then having this meaningful, aka semantic, model behind it is really powerful.
You can bake that into your different management and delivery systems so that they all understand what’s being operated on, and they can deliver much more personalized, relevant, localized, channel-appropriate, et cetera, experiences. So that’s my kind of area of expertise. I do that day in, day out: figuring out how we build good domain models and content models so that people can author and deliver this great content. And in the last six or seven years, I’ve been collaborating more and more with people like Yanko, who do ontologies and knowledge graphs and artificial intelligence systems, recommendation engines, and so on. So, Yanko: if the domain model is kind of like our design picture, our conceptual visualization of what our domain looks like and what the relationships are, can you take us into where things like ontology come in, and feel free to just throw in some taxonomy stuff in there? What’s the next step, and what are the benefits of going from having a model to having a formal ontology?
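To make the domain-model-to-content-model relationship above concrete, here is a minimal Python sketch. All entity names, fields, and content types are invented for illustration; a real project would derive them from its own domain modelling:

```python
from dataclasses import dataclass

# Domain model: a key entity in the subject domain and its attributes.
@dataclass
class Study:
    name: str
    drug: str
    participants: int
    geography: str

# Content model: different content types reuse the same domain facts,
# packaged differently for different audiences and contexts.
def patient_faq(study: Study) -> str:
    """A patient-facing content type: simple, minimal detail."""
    return f"{study.drug} was evaluated in the study '{study.name}'."

def regulator_summary(study: Study) -> str:
    """A regulator-facing content type: more formal, more data."""
    return (f"Study {study.name}: {study.participants} participants, "
            f"conducted in {study.geography}.")

s = Study("TRIAL-01", "Drug X", 1200, "EU")
print(patient_faq(s))
print(regulator_summary(s))
```

Both content types draw on the same underlying study, which is the point: the facts live once in the domain layer, and the content types decide how to express them per context.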

Yanko Ivanov  08:09

Yeah, this is really interesting, because what you mentioned earlier is having all the content or information around, let’s say, a study, or even a drug, and being able to present it in various different ways. Now imagine: what if we could actually automatically integrate not only the content about our study or drug, but also actual data? It could be study results; it could be specific data points for the drug or the study that are currently stored locally, in a database, in a very structured data system. When we’re talking about the content model, this is content we have, whether it’s structured content or rich text and so on; it’s mostly actual text, right? We can have images and so on. But the ability to integrate that text with the related data is where a solution like a knowledge graph really comes in. We can get into more detail about knowledge graphs, but the idea is that it’s basically connective tissue, a connective layer between various different elements in your enterprise, that produces one cohesive view of whatever entity you’re looking at.

Noz Urbina  09:28

Is that the ontology, is that a knowledge graph, or both?

Yanko Ivanov  09:31

That’s a knowledge graph. We’ll get to the knowledge graph in more detail, but how do we get there? You also mentioned the domain model. The domain model, like I said, is really the representation of our world, whether it’s pharma, whether it’s a manufacturing company, whatever that is. What is our world? What are we talking about? What are the specific entities that we refer to day in and day out? If it’s a product, what are the components of that product? What are the attributes of that product, and so on? That’s the domain model. How do we take it into an ontology? First of all, what is an ontology? Well, you mentioned taxonomy, so let’s start there. A taxonomy is a hierarchical view of metadata, right? You have a tree structure of different attributes, different metadata. A good example is country: you can have a list of countries, you can have a list of colors, of business units, and so on. All of these items compose your taxonomy.

Noz Urbina  10:28

I always use the example of region. Country seems simple, because there’s a list of countries, right? But in practice, when companies get a bit bigger, it’s like: okay, what are our regions? Where’s North America? Do we want to address the Nordics? Do we want to address Scandinavia? Do we want to address Europe? How does the world break down into groupings and sub-groupings of countries? So it’s quite an interesting taxonomy challenge.

Yanko Ivanov  11:02

This is a very common one. Like, how do I find a document that’s tagged for India, for example, when I’m searching for EMEA? How does that work, right?

Noz Urbina  11:12

EMEA being Europe, Middle East, Africa?

Yanko Ivanov  11:16

Yes. Yes. So this is a challenge for taxonomy, and this is where we start talking about not only taxonomy but getting into an ontology. So let’s talk about ontology. In very basic, simple terms, you can think about an ontology as a taxonomy in 3D. A taxonomy is your parent-child relationship, the tree structure; an ontology really allows us to provide different types of relationships between the things in our taxonomy. So if you had a list of countries, and then a separate list of regions, now you can connect all the countries in a specific region, outside of your parent-child relationship. It really provides a different layer of different relationships.
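The taxonomy-versus-ontology distinction Yanko draws here can be sketched in a few lines of Python. The country and region names are illustrative, and a real ontology would use a standard like SKOS or OWL rather than plain tuples; this just shows the structural difference:

```python
# A taxonomy is a single parent-child tree: one relationship type ("broader").
taxonomy = {
    "Sweden": "Europe",   # child -> parent (broader term)
    "Norway": "Europe",
    "India": "Asia",
    "Europe": "World",
    "Asia": "World",
}

# An ontology adds further *typed* relationships outside the tree,
# here expressed as (subject, predicate, object) triples.
triples = [
    ("Sweden", "memberOf", "Nordics"),
    ("Sweden", "memberOf", "EMEA"),
    ("India", "memberOf", "APAC"),
    ("Nordics", "subRegionOf", "EMEA"),
]

def members(region):
    """Everything linked to a region via a 'memberOf' triple."""
    return {s for s, p, o in triples if p == "memberOf" and o == region}

print(members("EMEA"))  # {'Sweden'}
```

Note that Sweden belongs to both the Nordics and EMEA groupings without disturbing its single parent in the tree, which is exactly the "different layer of relationships" described above.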

Noz Urbina  12:02

So the region-country thing is an ontology example.

Yanko Ivanov  12:05

It can be an ontology example, I guess, right? Because that isn’t how it’s implemented, but yes, absolutely, it’s a very, very basic example of an ontology. So let’s talk about a drug. In pharma, you have a drug, and there are so many different elements around the drug. Getting back to your domain: you had a little circle for a drug in your domain picture. Now we need to be able to define how the drug relates to other entities in that domain, like studies, participants, results, and status, and so on, but in more granular detail. When you’re thinking about the domain, it’s a very high-level conceptual representation of your world. When we start getting into an ontology, this is something that we can actually take and implement in our graph database. So it takes the domain and really gets into the detail of how specifically these things relate, maybe even with a little bit of engineering, just so it can be applied to the data in a more efficient way. But an ontology, ultimately, is a representation of entities, their attributes, and how they relate to each other. That’s the data model for a graph database. And once you apply that data model to the graph database, and you have the data integrated with it, that’s when you have a knowledge graph.
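A minimal sketch of that "ontology as the data model, knowledge graph as model plus data" idea, in Python. The entity types, predicates, and instance names are made up for illustration; production systems would use OWL/RDFS and a real triple store:

```python
# Ontology: which relationship types are allowed between which entity types.
ontology = {
    ("Drug", "studiedIn", "Study"),
    ("Study", "hasParticipant", "Participant"),
    ("Study", "hasResult", "Result"),
}

# Knowledge graph: actual instance data expressed against that model.
types = {"aspirin": "Drug", "trial-42": "Study", "p-001": "Participant"}
facts = [
    ("aspirin", "studiedIn", "trial-42"),
    ("trial-42", "hasParticipant", "p-001"),
]

def valid(s, p, o):
    """Check a fact against the ontology before admitting it to the graph."""
    return (types[s], p, types[o]) in ontology

assert valid("aspirin", "studiedIn", "trial-42")        # allowed by the model
assert not valid("aspirin", "hasParticipant", "p-001")  # a Drug has no participants
```

The ontology is the schema layer; the facts are the graph. Validation like this is what makes the graph's answers "curated, true facts" rather than free text.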

Noz Urbina  13:38

Okay, so there are a couple of things. One is that I was recently talking to somebody, and I managed to Google this in the interim, but I realized that I’ve been saying “graph database” for years without knowing why it was called a graph database. As far as I could tell, you might as well call it a strawberry database, for as much benefit as the word “graph” was adding. So I Googled it and eventually found out why it’s called a graph database. And there was actually somebody going back to the 80s talking about how unfortunate it was that we call them graphs, because most people don’t think of graphs that way at all. When we think of the word graph, we think of “graphic” or “graphical”: something visual. But it’s a very nerdy mathematical term, in the sense that, if you think back to geometry, you have functions; you have something times something, for example, and that will make a line. You can graph that function. But also, when you have a system of different things and their connections, you can talk mathematically about how you connect between the different things in that system, what falls within a certain group, and so on. Unfortunately, that’s also called a graph. And that’s why it’s called a graph database: because we have this idea of things and their relationships, and you can trace the relationships within it. Hence, graph database: one of the least intuitive terms in technology, I think.

Yanko Ivanov  15:28

Agreed. When I got into graph databases and knowledge graphs, in my mind I was like: wait, isn’t that actually a network? It really is a network of things and how they connect to each other. That’s what it is. But yeah, there it is.

Noz Urbina  15:45

So, there’s that. I want to also use another example, just because we’ve talked about pharma a bit. I know we do a lot of work in pharma, but I want to give people another example of an ontology that I think everyone can relate to, because we’ve all owned some sort of complicated thing at some point. So if we had a manufactured product, we could have an ontology where you have a product, and that product has parts, and you might need spare parts for that product. And those spare parts have certain availabilities: where they’re available, in what countries. There are all the variations of the part that you can get; they all have their price, whether they’re in stock or not, all this different information about a part. And they break down further. Think of a car: it has an engine as a part, but an engine is a really complicated thing that breaks down into lots of other sub-parts. So you can think about the whole world of manufacturing as another example of where you might have an ontology of the relationships between parts and spares and regions and, you know, engineers that are certified to install that kind of part, and all these types of relationships.

Yanko Ivanov  17:07

Materials as well: what is this part made of? What are substitute materials? What are the effects of that? Exactly, it can get really complicated. But it’s fascinating how it can actually describe the relationships between these things. And the fact that this is now captured in a machine-readable way can then allow us to implement all kinds of use cases on the front end, whether it’s a recommendation engine or just a website: hey, my car broke down, I need this part, and making sure that part actually fits my car.
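The "does this part actually fit my car" check is a one-relationship query once those facts are machine-readable. A tiny Python sketch, with invented part and model identifiers:

```python
# Compatibility facts, as (part, predicate, product-model) triples.
fits = [
    ("filter-a1", "fitsModel", "car-2019"),
    ("filter-a1", "fitsModel", "car-2020"),
    ("filter-b2", "fitsModel", "car-2021"),
]

def compatible_parts(model):
    """All parts explicitly recorded as fitting the given product model."""
    return sorted(p for p, _, m in fits if m == model)

print(compatible_parts("car-2020"))  # ['filter-a1']
```

In a real knowledge graph this would be a graph query (e.g. SPARQL or Cypher) over the same kind of explicit relationship, which is what lets a storefront answer the question automatically instead of sending the customer to a 500-page manual.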

Noz Urbina  17:42

Yeah, you dropped the word machine-readable in there, which was one of the other things I wanted to bring up. The part that I’m usually involved in is figuring out that domain model, but a domain model can live on a slide. It can be on a Mural board, it can be in a mind-mapping tool; you can do domain models kind of wherever you want, because it’s about figuring out the things and the relationships. And you do that by talking to people, by doing interviews, getting subject matter experts in, talking to clients, and understanding the domain, the subject area, that you’re talking about. But that doesn’t live in a database. It’s a powerful thing, and you will write better content, and you’ll be able to make better insights and do things. But it’s not machine-readable; you can’t use it to power anything. So what are some of the things we can actually power once we’ve got this in machine-readable form? What can we do?

Yanko Ivanov  18:45

Yeah, so this is where the ontology comes in, right? Because the ontology is machine-readable. It is that data model that tells the machine: okay, if I have a drug, or if I have a part, that part can have all these different attributes, it can have all these different relationships, it can be related to a different part, to different products, and so on. So now the machine, the algorithm, can really look at a specific part and tell me: okay, these are the related items that are explicitly related, these are the attributes of this part, and actually present that to me as an actual, true answer. And I’m specifically focusing on this because we’re going to talk about it a little bit later, when we talk about ChatGPT, for example: this is curated information. This is actual, true fact. So what kinds of things can we do with this? A recommendation engine is the very basic one. If we’re only talking about content, say you have a bunch of publications and articles; actually, we’ve implemented something like this for one of the development banks based in this area. You have a bunch of publications from different sources, and we need to be able to recommend, based on who I am, what areas I’m focused on, and my seniority level in the organization, what kinds of articles or publications are relevant to me. Because we have the articles, because we have the metadata on those articles, and because we know who I am, we can make that match and recommend content.
From a parts perspective, we can again recommend the parts that are very specific to you, whether it’s a monitor, a car engine, pharmaceutical products, or a medical product that you need for your hospital machine, for example. Knowing all the details, all the attributes of the machine that you’ve already purchased, and what’s available out there, you can then make that match and say: okay, these are the specific parts, components, and consumables that you need for your specific machine. Rather than: hey, here’s a big manual that’s 500 pages, figure out what you need from that,

Noz Urbina  21:05

and this huge catalog that goes with it, and you need to connect the two. We’re seeing that in manufacturing, where there’s a lot of demand for this idea of the universal, personalized hub, in the sense that I want to go in, and this would apply to any complex operation, but it’s very obvious and clear in manufacturing: I’m going to repair this machine at this hospital, or this airplane, or this, you know, Formula One racecar, whatever it is. And I want to know exactly the parts I need, the spares I need, et cetera. And if I don’t have them, I want to be able to order them while I’m reading through the procedures. So I’m about to start a job because something’s gone wrong, and I want it to tell me right at the beginning: you’re going to need these things. Do I have those things? No, I don’t. Can I order those things? How long will it be before I can get them? Et cetera. So instead of a separate parts database, parts ordering, manuals, knowledge base, how-tos, all that, we’re bringing it all together in one world.
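The metadata matching behind the recommendation use case Yanko described (a user profile matched against article tags) can be sketched very simply. The profiles, tags, and scoring here are invented for illustration; real recommenders weight and rank far more carefully:

```python
# Publications with metadata from the knowledge graph (illustrative tags).
articles = {
    "art-1": {"topics": {"infrastructure", "energy"}, "level": "senior"},
    "art-2": {"topics": {"health"}, "level": "junior"},
}

# A user profile: focus areas plus seniority level.
user = {"topics": {"energy", "transport"}, "level": "senior"}

def recommend(user, articles):
    """Score each article by topic overlap plus a seniority match bonus."""
    scored = [
        (len(meta["topics"] & user["topics"]) + (meta["level"] == user["level"]),
         aid)
        for aid, meta in articles.items()
    ]
    # Highest score first; drop anything with no match at all.
    return [aid for score, aid in sorted(scored, reverse=True) if score > 0]

print(recommend(user, articles))  # ['art-1']
```

The key point is that both sides of the match, the article metadata and the user profile, come from the same curated model, so the recommendation is explainable: you can say exactly which shared attributes produced it.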

Yanko Ivanov  22:21

That’s where bringing it all into one comes in. You wouldn’t have to copy all the data into the knowledge graph; we connect to it, we point to where things are. And that’s the power of it.

Noz Urbina  22:32

As a content model guy, I should bring this back to how this relates to content types. So if we have our parts and we have our spares, or we have our drugs and we have our studies, I mentioned that different people will want to pull together content in different ways. I think people often get confused, because it sounds like everything lives in the knowledge graph, and then people go: but why do I need a content model? Don’t I just put all my stuff in the graph? But you could be writing a paper for one audience, or a presentation for a certain audience, or you could be writing an email newsletter. Or you could be writing a formal response to a query: if somebody emails in with a question, you have to write them a response. You may mention many of these things; you might bring up lots of these facts and lots of these pieces of data, and you want them all to be accurate. You want them to be live-linked to the source of truth. But the content that you write about them will be of very different types depending on the situation. Your newsletter content may mention the same facts, but in different ways. You’re creating content in context: in your content model you’re saying, okay, I want to write a newsletter, or I want to create a brochure, or a manual, or a knowledge-base article. So those content types live in a separate layer from your ontology of all the data and facts. The data and facts you want to talk about, you’re going to talk about in lots of different ways, in lots of different channels, and from lots of different departments in your brand. You can do this in one department, but at the end of the day, you’re going to be producing lots of things.
At the end of the day, you’re going to be producing lots of different types of content that you deliver, and those are defined in your content model. And they interlock with your data model in the ontology. So they’re both going to be done separately, but they’re intertwined: they’re chocolate and peanut butter.

Yanko Ivanov  24:56

Okay, so there are two general ways of thinking about this knowledge graph or ontology. Let’s talk about the ontology and the content model. The content model can be really loosely integrated with a knowledge graph or an ontology, because, yes, you have an article, and the article is mostly text, not as structured as an ontology, for example. So an ontology that describes your products, or your drug, or whatever that is, can say: here’s the datasheet, or the publication or article, for that drug, and that points to your content item in your content model. That’s an explicit relationship, but the overlap between the content model and the ontology is loose; it’s not quite there. Whereas when you’re talking about more structured content, that overlap is much tighter. For example, you have a product and you want to provide specific images for that product that are approved images. Specifically for regulated industries like pharma, this is huge, right? You have to use the approved images, the approved information about a product; otherwise you can get in real trouble. So if we’re talking about a drug,

Noz Urbina  26:18

and also just for marketing, you want to use the right approved stuff. You don’t want to be sending pictures into Sub-Saharan Africa with a bunch of Swedish models.

Yanko Ivanov  26:28

Right. So that’s actually another good problem there. All right, so we’re talking about a product, and you want to have specific images, and you want to have the approved description of that product. Well, do you want to store that description in a graph database, a relational database, or do you want to store it in your content management system? And that approved product description may have a short description, a title, a tagline, multiple taglines depending on where it’s displayed, and so on. So you have

Noz Urbina  27:02

multiple legal disclaimers, the whole list?

Yanko Ivanov  27:05

Absolutely. So from a content perspective, your product model will have content that says: okay, here’s the product, and I have the title, the description, the short description, and everything else. Well, that same model is represented in your ontology, so that the consuming applications know exactly what description to use, what image to use, or what legal disclaimers to use, depending on the region and the use case. So the content model and the ontology really overlap in that sense, when your content is structured, really well structured, and we want to provide a lot of personalization and customization on the delivery end.
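A minimal sketch of that delivery-side resolution: one product, multiple approved variants, and a consuming application that picks the right one by region. Product IDs, regions, and filenames are all invented for illustration:

```python
# One product with region-approved content variants (illustrative data).
product = {
    "id": "drug-x",
    "descriptions": {
        "emea": "Approved EMEA description of Drug X.",
        "apac": "Approved APAC description of Drug X.",
    },
    "images": {"emea": "drug-x-emea.png", "apac": "drug-x-apac.png"},
}

def render(product, region):
    """Resolve the approved assets for a region; fail loudly rather than
    silently falling back to unapproved content."""
    desc = product["descriptions"].get(region)
    img = product["images"].get(region)
    if desc is None or img is None:
        raise KeyError(f"No approved assets for region {region!r}")
    return {"description": desc, "image": img}

print(render(product, "emea")["image"])  # drug-x-emea.png
```

Raising an error for an unknown region, instead of defaulting to any available asset, reflects the regulated-industry concern above: delivering unapproved content is worse than delivering none.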

Noz Urbina  27:48

Yeah, so in projects these days, for example, if we keep with the pharma example, we have people who are talking about this stuff literally in PowerPoint. And they’re getting tired of it, because the same facts will appear in many, many PowerPoints, and, for example, you’ve got a citation, and you have to copy and paste the reference, or you’ve got those facts at the bottom of every single slide that mentions them. The citation is not a thing; it doesn’t live anywhere; you can’t universally reference it. If you add another one to a slide, it doesn’t automatically number them for you. Whereas if we move to a much more structured way of working, we can create slides, and we will make sure that all the citations are up to date, and that all the graphics being used are for the particular region that you’re delivering that stuff to. But you could also express the same thing out as a website with a search box, or on a phone. And you can’t really do that unless you’re actually structuring your content. If at the end of the day you have this fantastic search engine or recommendation engine that goes, here’s the best content, and it sends you a PDF or a slideshow, that is not going to be able to deliver the experience that you want in this day and age. So either you copy-paste like crazy, and then review each copy so that you can have the different outputs that you want, or you move to a different way of working, which is this structured way of working on the content side, and a structured way of working on the data side. So I think you’ve nicely teed us up here: we’re taking the merging of structure and data, of content and data, in these structured ways.
Which takes us to the word on everybody’s lips, which is AI chatbots. And for some reason, AI and chatbots have kind of become synonymous with ChatGPT; the category has kind of pulled a Google, in the way that now people just say ChatGPT when they really mean large language model chatbots.

Yanko Ivanov  30:05

It’s amazing. This has happened in only a few months. Yeah, four months now. It’s just phenomenal how quickly it has grown.

Noz Urbina  30:14

So how does this interact? How does this world interact?

Yanko Ivanov  30:19

Yeah, I’ve been thinking about this. Actually, my colleague Chris Marino recently published a blog about ChatGPT and whether it’s ready for the enterprise. It’s a fantastic blog; I read it recently, and it came at a really good time.

Noz Urbina  30:34

Okay, you’ve got to send us that; we’ll put it in the show notes.

Yanko Ivanov  30:36

I definitely will. I have some thoughts from there; we’ll touch on that. But ChatGPT is still young; the technology is very young. We’ve seen it has tremendous power and tremendous potential, but it’s not quite there yet, right? It can produce results that are actually not correct, and in an enterprise environment, you can’t do that, right? It still requires a lot of supervision. It has huge potential in terms of content generation, with supervision. In my mind, if you’re generating content and you have somebody, an expert, to go through and update it and so on, absolutely, there are use cases where this technology will fit really well. But at this point, it’s still early for full-on enterprise adoption, again depending on the use case; there are use cases where it can fit nicely, but for regular enterprise use, it’s still a way off.

Noz Urbina  31:41

So let’s do a quick definition before we get too far along. I mentioned large language model chatbots, etc. If you haven’t heard already, if you’ve been hiding under a rock: ChatGPT is what’s called a large language model based chatbot, or chat interface. It essentially works the same way that autocomplete has been working on your phone for a decade or so. And now, if you use Gmail or Google Workspace, and I’m sure probably Microsoft as well, you get these auto-responses, these suggested sentences generated by something that has actually read the email. It’s been kind of creepy, even for a few years now, how fitting the responses it generates are. It’ll send people condolences for bad news; it’ll notice that something’s supposed to be funny and offer “oh yeah, that’s really funny” as a response. So this ability to understand language and pull out an appropriate string of words has been here for a while. The large language model is basically that, times a hundred. Instead of suggesting a probable word or sentence, we’re getting long-form answers, essay-length answers. It’s basically a probability engine that says: based on what you asked, chances are this is what you want as a response. And it generates that from an enormous language model created by scanning, basically, the internet.

Yanko Ivanov  33:34

It’s really fascinating to watch; how they’ve implemented it is just out of this world. But yeah, a colleague of mine was talking about this: ultimately, it comes down to pressing the autocomplete button again and again, and it will complete the whole sentence for you, pretty much. Like you said, it basically takes everything you’ve written so far, analyzes it against the incredible amount of content it already has as its learning base, and produces a statistical recommendation of what would come next. And again, and again, and again. It keeps going that way. It is fascinating.
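The “autocomplete again and again” idea can be shown with a toy next-word model. This is purely illustrative: real LLMs work over subword tokens with billions of parameters and sampled (not always greedy) choices, while this sketch just counts which word follows which in a tiny made-up corpus.

```python
# Toy bigram "autocomplete": pick the statistically most likely next word,
# append it, and repeat. A deliberately tiny stand-in for what an LLM does.

from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which word follows which word in the corpus.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def autocomplete(word, steps):
    """Greedily press 'autocomplete' `steps` times starting from `word`."""
    out = [word]
    for _ in range(steps):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]  # most probable next word
        out.append(word)
    return " ".join(out)

result = autocomplete("the", 3)
print(result)
```

Scale the corpus up to “basically the internet” and the continuations stop looking like parroting and start looking like essays, which is the leap the conversation describes.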

Noz Urbina  34:18

How can we... well, I’m sure most people who have heard of it have also heard of the times it’s gone crazy. Yeah. It’s straight up made up facts. They’ve already invented a term for large language model screw-ups: hallucinations. That’s the term they use now. So basically, if you ask it for a list of seven articles about the French Revolution, and it can only think of six, it’ll make up a seventh. It’ll literally just make up an article, and if you ask for more information about that article, it’ll double down. It’ll give you a summary of that article, it’ll give you when it was published, and none of this is real. It’s all just grammatically plausible. Hallucinations: not acceptable.

Yanko Ivanov  35:06

From a business perspective, like I said earlier, yes, it’s not right; it’s not there. But from a technology perspective, I mean, think about this: there is something out there that can just make up stuff. It’s like, wow, it creates its own thing. This is incredible.

Noz Urbina  36:10

Okay, so we’re cutting in here. We’ve never done this before; we are jumping back in. Yanko, you’re going to hear my audio a little bit differently, and if you’re watching the video, I’ve got a headset now and I’m actually in a different place. This is the first time this has ever happened. We were in mid-flow, talking about this very, very quickly moving industry. And we were basically saying, as you’ve heard so far, that large language models are very powerful but have some issues with reliability, traceability, and provability, and that to get them ready for the enterprise, the ideal would be if they were married with knowledge graphs, which do have traceability and auditability and are using verified facts and connections. So I’m going to play you a little bit of audio from an announcement that happened between when we recorded and when we wanted to go live. This is not a promotion of this particular product, but it illustrates exactly what we were talking about.

Microsoft Copilot announcement clip  37:08

To build Copilot, we didn’t just connect ChatGPT to Microsoft 365. Microsoft 365 Copilot is powered by what we call the Copilot System, a sophisticated processing and orchestration engine. It harnesses the power of three foundational technologies: the Microsoft 365 apps, that’s Word, Excel, PowerPoint, Outlook, Teams, and more; the Microsoft Graph, that’s all your content and context, your emails, files, meetings, chats, and calendar; and a large language model, or LLM, a creative engine capable of parsing and producing human-readable text, all accessible through natural language. It starts with a prompt from you.

Noz Urbina  37:28

So that’s basically it in a nutshell. What we were saying is that to make these systems enterprise class, they needed to be connected to the graph. Microsoft has said: your apps, your emails, your meeting transcripts. And that’s fine; that’s absolutely true. But the kind of technology that Yanko and I work on in these projects is not just ambient recordings. It’s not just capturing email discussions and transcripts from meetings, although that is very useful. There’s also intentionally curated and loaded data. Do you want to tell us a little bit about the difference, and how they fit into building solutions differently?

Yanko Ivanov  37:40

Yeah, absolutely. Like I said, it’s been literally a couple of weeks since this announcement came out; the breakneck speed of this technology is just amazing. We talked about knowledge graphs and ChatGPT. So, talking about knowledge graphs: as you said, we’re trying to capture actual facts, actual information that is true, validated, curated if you will, whether it’s data or actual content as well. Knowledge graphs have the capability of integrating both kinds of information. But ultimately, a knowledge graph is a data structure, a data organization structure: you can connect all these entities and express how they’re related. On top of that, you need to build a user interface, something that interprets that data. Knowledge graphs are really good for helping identify trends, for example; it depends on the use case, and there are so many use cases.

Noz Urbina  38:57

Trends, trends

Yanko Ivanov  38:59

But again, we still need the front end to interpret that data. So here’s where ChatGPT-style technology can really come in and play well. This is what we were talking about a couple of weeks ago, and it’s becoming a reality now: you connect the front end, the large language model, to the actual facts, to the actual database, the organization of the data. That way you can start interpreting that information and getting actual answers from your data, rather than something unexpected from an LLM trained on old information from the internet, which, as you can imagine, is not curated, not always verifiable, and not always accurate. We talked about this a little earlier. Yes, GPT-4, another release since we originally talked, is supposedly better at not making things up, but there is still a chance that it can introduce information that is not absolutely accurate. And at a business level, that’s usually not acceptable. So this is where we marry the actual facts, the organized information that’s curated and accurate, with the efficiency of this new technology: you have an LLM that can generate text, review contracts, and handle different business cases. Marrying these together can produce a really powerful system, a powerful environment, if you will.
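The marriage being described here can be sketched very roughly. Everything in this snippet is hypothetical: the entity names, the triples, and the prompt wording. `build_grounded_prompt` just assembles text; in practice its output would be sent to whatever LLM API you actually use.

```python
# A knowledge graph reduced to its essence: curated (subject, predicate, object)
# triples, plus a front end that hands verified facts to an LLM instead of
# letting it answer from uncurated internet text. All names are illustrative.

TRIPLES = [
    ("DrugX", "treats", "ConditionY"),
    ("DrugX", "manufacturedBy", "AcmePharma"),
    ("ConditionY", "subtypeOf", "ConditionZ"),
]

def query(subject=None, predicate=None, obj=None):
    """Return every curated triple matching the fixed positions."""
    return [
        (s, p, o) for (s, p, o) in TRIPLES
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

def build_grounded_prompt(question, entity):
    """Assemble a prompt that restricts the LLM to verified facts."""
    facts = "\n".join(f"{s} {p} {o}." for (s, p, o) in query(subject=entity))
    return (
        "Answer ONLY from the verified facts below. "
        "If they don't cover the question, say you don't know.\n"
        f"Facts:\n{facts}\n"
        f"Question: {question}"
    )

prompt = build_grounded_prompt("Who makes DrugX?", "DrugX")
print(prompt)
```

The curation lives in the triples; the LLM is reduced to the interpretation and text-generation layer, which is exactly the division of labor the conversation argues for.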

Noz Urbina  39:32

Yeah. And we talked about the idea that Google is going to be hot on their heels with similar announcements for their applications. I think it’s important to say, and this happens often, that the first brand to get out there gets the mindshare. But this is not a Microsoft thing, and it’s not just a Google thing; we’re going to see this kind of technology everywhere. Someone could refer to it as the iPhone moment of AI, in the sense that smartphones had been around for years, but then the explosion happened and they became part of our daily lives. We can’t even really remember life before them; it’s sort of a foggy memory. That’s what we’re seeing now. So it’s not that everyone’s going to start using Microsoft Bing instead of Google. It’s that these applications are going to become ubiquitous, as a layer that allows all of us to do queries and the kind of programming things that only technical people could do before. Now, if you can express it in natural language, computers can do it for you. And that is going to open up a level of power and efficiency which is going to be a leap forward.

Yanko Ivanov  41:46

I couldn’t agree more. Just yesterday I was chatting with some friends of mine, and we were thinking: hey, can you remember 20 years ago? What was work like? What was life like? And can you imagine, 20 years from now, what this is going to be? It’s just incredible, especially with the speed at which this is developing. It’s absolutely incredible.

Noz Urbina  42:10

I’ve been very lucky in my career that, kind of straight out of university, I ended up in this very tiny community where we were talking about things like the ones we’ve discussed. There were LANs, local area networks, and then there were WANs, wide area networks. And in 2002, we were in a company where we were theorizing, people in the community were talking, about PANs: the personal area network, where you would have these devices on your body that would be connected and able to leverage each other, so one device would do the processing, and other devices would be your eyes and your ears. That’s what we now call having a Bluetooth headset, a smartwatch, and a smartphone, and it’s just a normal thing. All these things had been under discussion, and we’re just now seeing the mass marketability of them.

Yanko Ivanov  42:13

Yeah, the way technology is developing, evolving, is just incredible. And like we’ve been saying about ChatGPT and large language models, it’s really going to change the way we work, the way daily life is, really. I mean, right now ChatGPT requires a lot of processing power. It’s not cheap. It’s not something an enterprise can just easily pick up and run with unless they’re ready to put in an investment. But imagine in a few years, when you have this on your phone. Again, it’s very powerful; the question is how we really integrate it so that it is powerful in a useful way, in terms of getting the actual, accurate information that you need, in the right way.

Noz Urbina  43:02

Also, I don’t know if this is since we recorded, but very recently, and it’s hard to keep track, things are happening in terms of days instead of weeks. So we had ChatGPT, based on GPT-3, about four months ago; I think it was right before the holidays, October, November. It all exploded right before the holidays. And then we’ve already had GPT-4, about five months later. There was, I think it was The Atlantic, if I’m remembering this right, but one of these tech houses made a prediction that getting a GPT-3-level large language model down to something that cost less than $1,000 to create, train, and run was going to take until 2030. Five weeks after that prediction, a team at Stanford University took one of Meta’s AI models, so Facebook’s AI models, and they got another AI model to train it rather than having human trainers. So within five weeks of the prediction that it couldn’t be done until 2030, they had this AI model, and it wasn’t even one of the more powerful ones, it was one of the more basic ones, that they were able to more or less benchmark against GPT-3. And they were able to do the whole project for 600 American dollars.

Yanko Ivanov  43:36

That’s incredible. Those researchers, yeah, they can do some incredible stuff.

Noz Urbina  44:18

Yeah, so this is running on a laptop at this point, you know; this is running on a local computer, for under $1,000, with cloud computing services and so on. So, as you were saying, having it on your phone in a couple of years is not unreasonable.

Yanko Ivanov  45:35

Okay. It’s also a little daunting. What are we going to do in a couple of years, or five years down the road? Who’s going to produce all this content? How are we going to know that this content was actually authentic, authored by somebody rather than something?

Noz Urbina  46:55

Will that even be a criterion anymore? You know, I talked about this on another podcast: we went through this with electronic music, we went through it with Photoshop, and now we’re doing it with words. I remember, I’d been doing photography since I was a kid, and when Photoshop came out, that was cheating. I remember: the real photographers worked with chemicals and light and smells and film; the real photographers didn’t do digital. And the same thing with music: electronic music was this kind of, oh no, that’s not real, that’s not composing, that’s not art. It’s the same thing here. You just need to get over the cultural barrier, this feeling of: well, we were always doing it this way, so this new way isn’t acceptable.

Yanko Ivanov  47:15

The only difference here is that with music and with photography, it was one specific area. This has an impact on everything. It really can transform industries, not just how we work, but entire industries, entire economies. Yes, absolutely.

Noz Urbina  48:11

I mean, yeah, we’ve got to wrap this up. I think we could talk for a whole other podcast about the implications. As we said, things are moving so fast. It’s been so great to have you, Yanko. I think we’ll have to have you back before the end of the year to do a roundup: how did all this turn out, a few months on? But thank you so much for your contributions, and thank you, everybody, for listening. Remember to check out the rest of the podcasts on OmnichannelX.digital. We’re going to be announcing some themes for the year, like content modelling in pharmaceuticals, where you can get lots of different resources, podcasts, webinars, etc., all year long. Check that out, subscribe to our mailing list, and if you liked this, give us a like and a comment, and let us know what you want us to talk about next time we get Yanko on the show.

Thanks so much. Thanks, everybody.