In this episode, Noz Urbina and Jeff Eaton unravel the intricacies of content management systems, data structures, and the role of AI in today’s digital landscape. From the challenges of migrating content to the importance of shared organizational language, this discussion offers valuable insights for content professionals and tech enthusiasts alike. Discover how AI can augment creativity and problem-solving, and gain a deeper understanding of the complexities behind content management.
What you will learn:
- The varying needs of different content management systems and their overlapping domains
- The critical role of content migration and its challenges in transitioning to a headless CMS
- Why developing a shared organizational language and content structure is essential
- The distinction between AI designing content models and recognizing patterns within existing structures
- The power of AI as a creative tool to augment human creativity and generate novel ideas
- Realistic expectations for emerging technologies and their iterative development
#OMNIXCONF IS BACK
Join us ONLINE.
Super-early bird pricing is available from now until February 28.
Jeff Eaton is a founding partner at Autogram, where he solves complex content management problems for media, education, and enterprise businesses.
He co-authored the first edition of O’Reilly Media’s Using Drupal, hosted the Insert Content Here content strategy podcast, and is a frequent writer and speaker at web, content strategy, and open source conferences.
Full session transcript
THIS IS AN AUTOMATED TRANSCRIPT
Noz Urbina 00:44
Hello, everybody. I am super excited about this episode. I am here today with Jeff Eaton. He's a partner at Autogram, and all round one of my favourite folks in this industry. Jeff and I are renowned for going off on one when we get going, so this is going to be an exciting conversation, I'm sure. So let's start. Jeff, I'm sure that most of my followers, if not all, have probably heard of you or know of you. But can you please introduce yourself, describe a bit of your background, and what kind of work you do?
Jeff Eaton 01:25
Yeah. So my name is Jeff Eaton. I'm a partner at Autogram, sort of a bespoke consultancy. We do, you know, content modeling, information architecture, design system work. Basically all the stuff that's necessary when a large to mid-size org says, we should launch a new initiative and it needs stuff, or we're going to migrate, or we've acquired a company and they have all this web stuff and we have all our web stuff, et cetera, et cetera. My background is, let's say, back when dinosaurs roamed the earth and we were all very excited that Netscape had just added tables. I dove in. I was a freelance writer and designer at the time, writing for tech magazines and stuff like that. But I ended up starting to build out web stuff, because it was interesting, it was cool, you could publish your own thing. And very shortly, I ended up becoming the guy at the agency I worked for who did the web stuff. Over the years I got deeper and deeper into that. I really dove into the backend side of things, building out sites, managing servers. And over the course of that, I ended up becoming one of the core contributors to the Drupal project, and ended up working in that space for probably about 10 to 15 years. Both just building meat-and-potatoes CMS software, but also increasingly working on planning and architecting really large scale migrations, where companies were saying, let's build a white-labeled, Drupal-based platform for us to spin up websites for our properties and stuff like that. I ended up working with Lullabot, one of the premier Drupal agencies in that space, and built out their digital strategy and content strategy practice over the course of probably eight to 10 years or so.
Yeah, for me, you were kind of synonymous with Lullabot for ages. Yeah, yeah. And, you know, that practice is now headed by Greg Dunlap. We were, I guess, the co-nerds of content strategy at Lullabot for many years, and he's got a great team going on there. But probably about five years or so ago, I really started chewing on a couple of specific challenges that I was seeing with a lot of our clients, around the way that structured content models and design systems and CMS architectural choices were all being pursued and iterated on by the technical teams, the design teams, the content teams. Good decisions were getting made, but things were not adding up once all the pieces had to come together. And it felt like there was a real collision, in particular, between the good, cutting-edge work that was going on with component-oriented design modules in the design systems world, and the increasing understanding and use of component-oriented content in headless CMSs and stuff like that. And I started talking to Karen McGrane, the author of Content Strategy for Mobile and a long-time IA guru in that world, and Ethan Marcotte, who wrote the book on responsive design. All of us were seeing the same kind of pattern from different angles, in particular around the growing world of high-variation content. Templated stuff is easy; and with totally one-off stuff, everybody just knows you throw a designer and a developer at it, and they make it happen.
But this middle zone of landing pages and customizable this-and-that, where companies say, yeah, but also we sort of want Squarespace built into our CMS, that zone was an increasing explosion of problems as organizations tried to manage it, tackling it from both a governance standpoint and an ongoing maintenance standpoint, and in places where the content model had to really dovetail effectively with the design system for that kind of approach to work.
Noz Urbina 06:10
So when you say "we want Squarespace built in," this is: we want to have islands of these quick, easy builder pages mixed with a more structured application. More structured, but yeah,
Jeff Eaton 06:25
the idea is, you know, editorially designed pages, essentially. What's there, what kinds of elements are on the page, what kind of stuff's going on there: maybe there's some dynamic stuff, maybe there's some hard-coded, handwritten copy just for this page, maybe there are lists of things from elsewhere getting pulled in. But it's all being conceptualized on the fly by the page producer or the content creator, whatever role is responsible for it. And the idea is, they want to have a CMS-style builder for that kind of stuff, not a conversation with the development team and a ticket and a request to add something. Exactly. And you know, it's understandable. But we found that, inevitably, once you get that kind of capability in there, very quickly lots of the site ends up being built with it, because everything looks like a nail once you've got that particular hammer.
Noz Urbina 07:30
So you do a bit of unstructured, exactly. And what's
Jeff Eaton 07:34
fascinating to me is that even a good page builder like that is still built on structured content, you know, the individual chunks of stuff that they're putting together. It's not like they're just throwing HTML at the screen like in the bad old days. But what actually is that page that they've created, conceptually?
Noz Urbina 07:57
You're really raising a big question with this word "structured." There's structured as opposed to a wall of blathering words. Then there's structured in the sense that it's better chunked up, you know, better use of lists, tables, et cetera. But then there's structured as in: we actually know what we're talking about here. There's a purpose and an intention for these sections; we've given them a name. And then sections of a similar type will be
Jeff Eaton 08:30
shared with other kinds of pages of the same style. And what we've found consistently is that once an organization got to hundreds, even thousands, of pages built out using these tools, they faced exactly the same organizational problem they did in the bad old days with raw HTML files thrown into a directory. We don't know what's there. We can't rework the design, because all of the elements they've used to build things are deeply baked into their connection with the output templates in the design system modules. And if we start iterating, all these pages break, and we aren't sure why, and we aren't sure what those pages are trying to do. Because what that page is, at a conceptual and meaningful level, just exists in the mind of the person who threw those modules together.
Noz Urbina 09:29
So I think we're getting to the root of the difference between content that has been modeled, and content that has been put together with more of a chunk-oriented page builder. And they are not the same thing. Yeah. And so I'm very excited about this because I'm constantly... I came from
Jeff Eaton 09:53
We call it high-variance content. It's stuff that still needs to be structured, but there's a high degree of variance in what's being done with it and what form it takes. That's a real problem.
Noz Urbina 10:05
Okay, that's interesting. So for me, that's a model that lends itself well to adaptive content or personalization. So... I think we should probably come back to that. That's another
Jeff Eaton 10:17
space where the same challenge ends up coming up. Even if it's not editorially designed pages, the idea that we're going to have to use rules and algorithms to assemble stuff runs into the same challenges very quickly.
Noz Urbina 10:29
Yeah, exactly. And so for me, when I use the term structured, I'm always coming back to the separation of concerns between the layout and visual experience, and the content itself, which is governed more by meaning, purpose, intent, actual meaningful labels, aka semantic tagging. And this is core right now. One of the hot buzzwords in our industry right now is headless, which supports our favorite buzzword, which of course is not a buzzword here at OmnichannelX, which is omnichannel. So if we're going to do these things, the separation of concerns is really important: to have this content structured enough that it is no longer locked into this particular page incarnation, and could be operated on by a program or a filter, by something that could make it look different. It could be migrated to a different page, could be white-labeled into a microsite, or even go to a completely different format like print or PDF.
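To make the separation-of-concerns point concrete, here is a minimal sketch (the `Quote` type and renderer functions are illustrative, not from any particular CMS): the content item carries only meaning via semantic labels, and each channel supplies its own presentation.

```typescript
// The content item is channel-agnostic: it says what it *is*, not how it looks.
interface Quote {
  text: string;
  attribution: string;
  intent: "socialProof"; // semantic label, not a layout instruction
}

const quote: Quote = {
  text: "This cut our migration time in half.",
  attribution: "A. Customer",
  intent: "socialProof",
};

// Web head: render the same item as HTML.
function renderHtml(q: Quote): string {
  return `<blockquote data-intent="${q.intent}">${q.text} <cite>${q.attribution}</cite></blockquote>`;
}

// Print/PDF pipeline: render the same item as plain text for a typesetter.
function renderPlain(q: Quote): string {
  return `"${q.text}" -- ${q.attribution}`;
}
```

Because no markup lives in the content itself, adding a new channel (a microsite, an app) means adding a renderer, not reworking the content.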
Jeff Eaton 11:41
There may be work in the translation, but you have the capability of doing that in a regular and consistent fashion. Without that, it all falls apart.
Noz Urbina 11:50
So, okay, I've got all the questions. Let's walk this through for people who are getting into this content modeling thing. You're on the content modeling experts panel; we're running this this year as part of our Content Modeling Design Series, so we're talking a lot about content modeling these days. Can you walk us through a bit of your approach? What methodologies or frameworks do you use? And what do you do when you come into this kind of situation, where they want to do all the things, but they want to move to this more manageable, more consistent, shared kind of content-as-a-service model?
Jeff Eaton 12:35
So, you know, not to go too far into the philosophical, but language has been one of our core metaphors for this. Because the idea is, once you get to the point, as you do in these high-variation, page-builder-y types of scenarios, where you have lots of different component elements and you want them to be assembled in lots of different ways to communicate different kinds of meaning, what you have isn't an assembly problem, what you have is a language kind of problem. Because ultimately you are trying to communicate a message. That's what content is: it's a perspective, it's a message, it has an intended audience, it has a medium that it moves through. That's the different-channel conversion you were talking about; a message can be converted to different mediums. But in order to really effectively tackle these kinds of high-variation problems, you have to start thinking in terms of: what's our core lexicon? What are the kinds of things that we are trying to do as we communicate? What are the pieces that our messages get composed out of? In the same way that an essay, in rhetoric, has meaningful, distinct sections: there's the introduction, there are supporting statements, there's a conclusion, there are references and citations and stuff like that. Recipes have ingredients, steps, techniques, all that stuff that we're familiar with. Lots of different kinds of things have that kind of structure. And when you start making things that can be assembled in all sorts of arbitrary ways, you need to think about not just the list of things that you can stick in there.
You also need to think about the grammar that is inherent in them: what things can start off a new piece of content, what things are only really meant to close, what things go together and support each other versus stand alone, how things are meant to connect to each other and work together to compose a complete, meaningful message. That's a big part of it. And, you know,
Jeff Eaton 15:10
from a practical work standpoint, that's not too different from the basic component inventory kind of work that you do when you're breaking things down. But it's thinking about it in those terms: what are these pieces, how are they meant to go together, what rules govern them.
Noz Urbina 15:24
Can you give a concrete example? Like, okay, there's the recipes example, but something that...
Jeff Eaton 15:32
Let's say a product page or something like that. You can say, oh, we've got case studies, we've got pull quotes, things like that. Those are very specific kinds of things, but you could say that they're all of the category of social proof: other people have used this product and met with success, other people have good things to say about this product, stuff like that. You might say that some sort of social proof element is a required element of a product page; it could be lots of things. And you can either open with a social proof, or you can have a social proof in the middle, but you don't close with a social proof. You always close with a directive or a next step or something. Exactly, a call to action. Whatever you call it, whatever you name these things, the important part is starting to develop, inside of the organization, a vocabulary that actually makes sense to the different people working with it, on the design side, on the engineering side, on the content production side. So that when we say, oh, we're making a product page, what's going to be there makes sense. Even if there's a high degree of variation in it, there's an agreed-upon rhythm to it, and there's an understanding of what pieces are going to be a part of it and how they're meant to support each other. You can't create a product page without a call to action; you can't create a product page that doesn't have some social proof; you can't create a product page that doesn't have some of these things. But it can grow and stretch, it can have different rhythms inside of it, as long as it meets certain criteria.
And that essentially is a promise: if we say "product page," it has these things. Maybe with more variation than a typical, strictly modeled, templated page, but it still has some of the benefits, like software being able to interrogate it and say what's there. This block isn't just a three-up grid; it's the social proof portion of this product page.
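Jeff's "grammar" idea can be sketched in a few lines of code. This is a hypothetical illustration, not Autogram's actual method: component types carry rules about where they may appear, and a page is validated against them, so software really can interrogate a page and say what's there.

```typescript
// Section types and page rules are illustrative names, not a real CMS schema.
type SectionType = "hero" | "socialProof" | "featureList" | "callToAction";

interface PageRules {
  required: SectionType[];    // must appear somewhere on the page
  mustClose: SectionType;     // the page must end with this
  mayNotClose: SectionType[]; // may appear, but never last
}

// "You can't create a product page without a call to action, or without
// some social proof; and you don't close with social proof."
const productPageRules: PageRules = {
  required: ["socialProof", "callToAction"],
  mustClose: "callToAction",
  mayNotClose: ["socialProof"],
};

function validatePage(sections: SectionType[], rules: PageRules): string[] {
  const errors: string[] = [];
  for (const req of rules.required) {
    if (!sections.includes(req)) errors.push(`missing required section: ${req}`);
  }
  const last = sections[sections.length - 1];
  if (last !== rules.mustClose) errors.push(`page must close with: ${rules.mustClose}`);
  if (rules.mayNotClose.includes(last)) errors.push(`page may not close with: ${last}`);
  return errors;
}

// Opening with social proof is fine; closing with it is not.
const goodPage: SectionType[] = ["socialProof", "hero", "featureList", "callToAction"];
const badPage: SectionType[] = ["hero", "featureList", "socialProof"];
```

The point is not the validator itself but that the rules live in a shared, named vocabulary the design, engineering, and content teams all recognize.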
Noz Urbina 17:54
Okay. So that's step one: coming in and inventorying and developing that common language of, say, "product." So, I'm thinking of projects we're working on right now, and something I do a lot, which is: how do you then socialize that concept and its benefits to design teams and tech teams who don't especially care, and haven't been used to caring? They're busy designing stuff. They want to make an app, they want to make a page, they want to make a thing, they want to do a brochure, and you're coming in and putting these language requirements on them. How do you get them excited about participating
Jeff Eaton 18:41
with this? Getting excited about
Noz Urbina 18:44
it. Excited, and then actually doing it.
Jeff Eaton 18:47
Excited is a very lofty goal. "Not angry" is usually the first one. I would actually say, and this is something that Ethan Marcotte often says about design systems and that Karen McGrane and I have often said about content models: thinking about this sort of organizational language or communication, your organization already has one. It may already have several. The question is, how consistent is it? Is it a good fit? Is it working for you? And in that sense, treating it as: we're trying to smooth this out, we're trying to make this work better and more closely match what you really do. Not: we're imposing this new philosophical structure in order to make it easier to roll out an Android app in a year. I don't think anybody really gets excited about that kind of thing. But approaching it as: the degree of flexibility that you want, and the need for reuse or multichannel delivery that you have, means that you've worked yourself into this language-like problem of all these different assemblable modules and stuff like that. And in order to make this work for you, we want to put a little more rigor around how we think about this stuff, so that everybody's on the same page. When the design team wants to iterate on how their components look, they're not constantly running into deep disconnects with what people have put into them, because there wasn't any meaningful connection between how it looks and what goes in there.
On the engineering side, it's like: hey, we would love it if you didn't have curveballs thrown at you in terms of what people need, new random stuff people need to do with these pieces. We would love it if there were meaningful rules around what different kinds of pieces are meant to do and how they function with each other, that you could use as a baseline. Like: oh, new functionality needs to be added? That sounds like a social proof component. Not: we need to go to the whiteboard and figure out what something is from square one. Or you have
Noz Urbina 21:13
you have a basic operating language to work with your colleagues. Exactly.
Jeff Eaton 21:18
And that has been pretty effective. You know, the devil is always in the details, but yeah, that has been very effective. Well, I've
Noz Urbina 21:25
got a specific devil to raise, which is this. We're doing this project, and we're launching a few of them right now, where we all want to migrate to the single source of truth, put everything into this common language, et cetera. First of all, I spent the first half of my career kind of thumping my chest about that goal. And then, across the last 10 years, I've become more and more cognizant that if we're going to truly do omnichannel, that's going to involve multiple sources. And I've reflected on this with many of my peers in the industry, like yourself. We've been talking about these concepts of single sourcing and single source of truth and unified formats, one language, et cetera. Nobody's done it. Nobody's done all the things. There's no enterprise that has all their stuff in a common format, using one repository. And would you even want to? Is there any repository that does all the things? There are PIM systems and CRMs and CMSs for a reason: because they're very different. They're specialized.
Jeff Eaton 22:35
And there are different needs for each of those different kinds of things. Like you said, product information management has a meaningful overlap with content management, especially when you're talking about marketing and support content and stuff like that. But they're very different things, with very different workflows and data life cycles and stuff like that.
Noz Urbina 22:58
And then there's the big one in between, for example CRM, because a lot of people are using their CRM as a kind of knowledge base, and a knowledge base is structured content.
Jeff Eaton 23:07
And then the CRM morphs into your CMS for the marketing site, and no one realized it. Yes,
Noz Urbina 23:16
yes. So this is now a many-to-many relationship, where we're using this common language more deeply, possibly in something like a headless CMS, but then more generally for other systems. Because if we do bring in stuff from other systems, if they have pricing information, or if they have knowledge articles, et cetera... just to reel off a couple of examples we brought up: presentations, so you've got a presentation library in SharePoint. You've got to have enough of a high-level common language that those can fit together and be delivered out in a unified experience for audiences. So there's that many-to-many thing. And even just with your headless CMS, you've got this new language you've defined, aka your content model,
Jeff Eaton 24:13
I would actually even go farther and say that this kind of language is more of an institutional ontology, or a shared language for messaging, that the content model is a part of and connects with, but they're not necessarily one and the same, especially when the design system uses it as well. But yeah, they are very closely connected. I think it might even be your domain model, the way Carrie Hane and Mike Atherton have written about it; that might be closer to it than just the content model. And it's inspired by their concept of the domain model, which comes from the world of domain-driven design. That's the school of software architecture where the idea of building what it calls a ubiquitous language for your software is important. The intent is to make a language that subject matter experts, not just software engineers, would be capable of using to describe functionality. It should actually be meaningful, and the software should reflect it. That idea, I think, has been a big inspiration.
Noz Urbina 25:36
Absolutely. Domain-driven design has been a big thing for me as well. So it's that very high-level language of: what are the things we're talking about? How do they relate to each other? And you're right, there's also taxonomy, like: what is our list of products? I've been into many, many organizations that don't even have a definition of what a product is. Is a product the thing we sell? Or is it still, at a certain...
Jeff Eaton 26:03
At a certain level, every organization is ultimately faced with: what's the difference between a service and a product? Yeah,
Noz Urbina 26:13
exactly. Or, if this is a third-party thing we resell, is that a product? If this thing is only sellable in conjunction with this other thing, is it an add-on, or is it a product? Defining these things, and defining them universally, is very valuable and powerful. But even if you got into a more idealized scenario, you're still not all in one kind of system. But I want to come back to the content that we're working on. Let's say you have all that other stuff worked out. You've got all this unstructured content, or...
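The product-versus-service-versus-add-on question above can be pinned down in a shared vocabulary. Here is a hypothetical sketch (the categories and fields are invented for illustration) using a discriminated union, which forces the organization's answer to "what counts as a product?" into one explicit place:

```typescript
// One shared definition of the kinds of offerings, instead of each team
// carrying its own informal answer. Fields are illustrative.
type Offering =
  | { kind: "product"; sku: string; resold: boolean }              // things we sell, incl. third-party resale
  | { kind: "service"; sku: string; deliveredBy: "us" | "partner" }
  | { kind: "addOn"; sku: string; requires: string };              // only sellable with another offering

function isStandalone(o: Offering): boolean {
  // The rule "add-ons can't be sold on their own" now lives in one place.
  return o.kind !== "addOn";
}

const widget: Offering = { kind: "product", sku: "W-100", resold: false };
const warranty: Offering = { kind: "addOn", sku: "A-7", requires: "W-100" };
```

Once the distinction is explicit like this, every system that consumes the model inherits the same answer to the "is it a product?" question.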
Jeff Eaton 26:49
Or differently structured, or somewhat differently structured?
Noz Urbina 26:52
Yes, somewhat structured content. And then you have this new headless CMS, which you want to move onto, and you're moving everything to that. I've seen this tackled many, many times in my career, and I'm always interested in coming back to it. I wanted to raise it on the podcast because I think a lot of people don't think about it enough. And what's happening now is I'm seeing headless CMS vendors go full bore into this: put all your content in me and everything will be great. But the question of getting the content in, of getting the content into structure from unstructured, is something I'm always looking for a fresh perspective on.
Jeff Eaton 27:33
A whole lot of disposable, one-time migration code that somebody has to write. Yeah, at the end of the day, I don't think anyone anywhere has ever been able to write a magical "you click the button and boom, your stuff migrates." It's just the nature of the beast that there are so many differences in the from and the to, from project to project, that it's where custom work has to be done. From a developer perspective, data and content migrations are just the bane of every project. At Lullabot, many years ago, the running joke was that in the first meeting with a client, when they were thinking about maybe doing a project and talking with different vendors, my first question was always: so, have you started the migration yet? But we're not even really sure what the CMS is! You should have started the migration already. It's also one of the places where it's so easy for dragons to be buried, where you unearth stuff that throws a nasty curveball at your well-laid plans, explodes the scope of some of the development work, or complicates what was otherwise a fairly polished, tight content model. And there are edge cases you have to account for that no one even remembered, but that get unearthed when the rubber meets the road of moving stuff from CMS A to CMS B or whatever. I don't think that will ever go away, because solving it means solving the problem of content.
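The "disposable migration code" Jeff describes usually looks something like the following sketch (field names and template values are made up): a one-off transform from the old CMS's shape to the new model, where anything it can't map is surfaced for a human, which is exactly where the buried dragons turn up.

```typescript
// Old system's shape: loosely structured, with a free-form template name.
interface LegacyPage {
  title: string;
  body_html: string;
  template?: string;
}

// New model: explicit, constrained types.
interface NewArticle {
  headline: string;
  body: string;
  pageType: "article" | "landing";
}

const unmapped: LegacyPage[] = []; // the "dragons" pile, queued for manual review

function migrate(page: LegacyPage): NewArticle | null {
  const pageType =
    page.template === "story" ? "article" :
    page.template === "promo" ? "landing" :
    null;
  if (pageType === null) {
    unmapped.push(page); // unknown template: surface it, don't silently guess
    return null;
  }
  return { headline: page.title, body: page.body_html, pageType };
}
```

The size of the `unmapped` pile after a trial run is often the first honest measurement of how messy the legacy content really is.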
Noz Urbina 29:30
Yeah, okay. So I think that's the message I wanted you to convey, which is: do not underestimate this part; it's a very significant portion. Having your new system up and running, and being able to add content to it and deliver content from it, doesn't mean that you're all done. Migration is a massive part of these projects, and ideally you're not hearing about that after the fact.
Jeff Eaton 29:57
Right now, yeah. This has actually been something that, over the past half a year to a year, Autogram has started focusing on more. Not because it's necessarily a shift for us, but because we really started realizing that migrating content is the moment where a lot of organizations that may not explicitly think about developing a consistent, organization-wide metamodel or something like that actually encounter the problem and feel the pain, and say: oh my gosh, this is a huge pain, and this is where we keep unearthing undiscovered problems. It's not a big "told you so," or even a "hey, you may not conceptualize this as an organizational ontology problem or a content architecture problem; your developers may have just warned you it's what you've got to plan for." But that's a spot where we can help; this is one of the key applications of it. And that actually resonates deeply with a lot of orgs, because everybody's been through painful migrations and everybody's got horror stories. It's one of those things, like your driver's ed experiences or something, where people just say: oh, I remember that, that was horrible. One of the things that I do think is interesting, and you touched on this earlier: you mentioned this idea that things live in lots of different systems. Even going to a headless CMS doesn't necessarily mean everything's going to be consolidated into just that CMS. Ideally, the benefit of a headless content management system is that it can coexist effectively inside an ecosystem of lots of different parts, because it doesn't assume it's in control of everything.
It's fascinating to me, because there's actually a new, emerging ecosystem of products whose sole purpose is orchestration of all the different pieces. I think Uniform is one of them; we've taken a look at it and been chatting with the folks there. Yeah, yeah.
Noz Urbina 32:23
Yeah. They have had people speaking at OmnichannelX before. So there's this idea of the orchestration octopus, as I call it: you have this many-to-many thing, you have many heads and you have many back ends, and it's the routing of all this stuff. Which leads me to... yes, I think that's another category of thing that's not being discussed enough. We're talking a lot about putting in these back-end CMSs, these headless CMSs, and putting the heads on them. But it's the routing: now that we have all this personalization ability, how are we going to make sure that things go where they should go? And this takes us to the role of data in all this, and the obligatory AI question of: can't my AI just handle that for me now? Can't it structure my content for me? Can't it, like, move stuff around for me?
Jeff Eaton 33:18
It’ll certainly tell you it can.
Noz Urbina 33:23
Jeff Eaton 33:25
I mean, with a smile, yes. Yeah. That's a broad question, so I'll do my obligatory defining-some-terms real quick and say that I think AI and machine learning and automation tooling, as a category of stuff, can definitely be tremendously valuable. But more recently, when people say "can AI do this," what they usually mean is: can generative, large-language-model-based tools, like ChatGPT or DALL-E or Midjourney, or something custom based on, you know, Facebook's LLaMA or something like that... can these large language models that seem to be capable of magic do this stuff for me? It's responding to natural language instructions with natural language output, or even image output, or anything we can translate into some form, so it seems like they can. And I generally urge a great deal of caution when people are asking whether a model like that can structure things for them, or make sense of things for them. Because, by definition, large language models and these generative AI tools are not making sense of things. They are making patterns of things, which is sometimes the same. But we don't really have control over what kinds of patterns they're recognizing. Essentially, the model is saying: if someone were to say what I just said, what would probably follow it? And it makes sense, because we're used to conversing like that. But from a meaning standpoint, like "is this the right structure for what we're trying to accomplish?": you're not getting analysis of that question. You're getting output that would usually follow someone saying something like that. So it's one of those places where the natural language aspect of it can be incredibly compelling. Because you ask it: hey, write me a poem about peanut butter.
Or write me a product description for a new car? Or what kinds of questions should I ask a movie director about their new film? And it can generate those things, because those are the kinds of patterns that occur a lot in the conversations we have on the internet and in the training data. But once we start moving to things like understanding of structures, and planning of structures, and analysis, it’s important to recognize that those models aren’t necessarily making that jump. They’re still talking about what words usually follow what words, not what concepts people are trying to iron out. So it’s very easy to get output that is understandable and reads well, but the model hasn’t necessarily done any of the thinking or the analysis to do things like say, okay, we’ve got four CMSes, and we’re going to be doing this with this one and that with that one, and we need to be able to make sure that these two pieces work together effectively.
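[Editor’s note: the "what words usually follow what words" behaviour Jeff describes can be sketched in miniature. This toy bigram model, with an invented three-sentence corpus, just counts continuations; real LLMs condition on far longer contexts with learned weights, but the underlying move, pattern continuation rather than analysis, is the same.]

```python
from collections import Counter, defaultdict

# Toy corpus; real models train on vastly more text.
corpus = (
    "the cat sat on the mat . "
    "the cat sat on the rug . "
    "the cat chased the dog ."
).split()

# Count which word follows which: "what usually follows this?"
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    # Return the most frequent continuation seen in the corpus.
    return follows[word].most_common(1)[0][0]

print(most_likely_next("the"))  # cat
print(most_likely_next("sat"))  # on
```

The model "answers" fluently without any concept of cats or mats: it has only statistics about adjacency.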
Noz Urbina 37:30
So there’s a difference between the AI designing my model for me? Yes. And the AI kind of recognizing, "this looks like it probably goes here in the model."
Jeff Eaton 37:44
"This looks like X." Yes, exactly. And that’s where we start getting into areas where those kinds of tools can be very effective: when you have a meaningful structure and understanding of what’s going on and what needs to be there, and the AI is trained well enough to follow the patterns that you have developed and to work with them, rather than just coming up with its own thing. That kind of stuff can be very effective. Even outside the realm of large language models, there’s lots of interesting tooling, and you can absolutely leverage that stuff in very useful ways. This is actually one of the reasons why developing your organizational language, and figuring out what the grammar of your communication with your clients looks like, can be tremendously effective for building the foundation that you will need to leverage these kinds of automation and intelligence tools. Because you are essentially doing the really hard part for them when you think through that stuff.
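[Editor’s note: the distinction Jeff draws, letting a machine recognize where content fits within a model humans designed, rather than inventing a model, can be sketched roughly like this. The content types and keyword lists are hypothetical; a production system would use a trained classifier, but the division of labour is the point.]

```python
# The humans define the content model; the machine only places
# content within it, never invents new types of its own.
CONTENT_TYPES = {
    "recipe": {"ingredients", "preheat", "servings", "bake"},
    "product": {"price", "warranty", "sku", "shipping"},
    "event": {"venue", "rsvp", "doors", "tickets"},
}

def classify(text):
    words = set(text.lower().split())
    # Score each predefined type by keyword overlap.
    scores = {t: len(words & kws) for t, kws in CONTENT_TYPES.items()}
    best = max(scores, key=scores.get)
    # Anything that matches nothing is flagged, not forced into a type.
    return best if scores[best] > 0 else "unclassified"

print(classify("preheat the oven and bake for 20 minutes"))  # recipe
print(classify("rsvp before doors open at the venue"))       # event
```

The hard part, deciding what "recipe" and "event" mean for your organization, happened before any code ran, which is the foundation-building Jeff describes.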
Noz Urbina 38:55
What ChatGPT has really done, I call it the Apple effect, is that by generating so much excitement about their product, they’re kind of dumbing down the whole market. In the sense that the whole discussion about AI becomes, as you said, about that kind of AI, when something like auto-tagging or auto-categorization or natural language processing, which is older and more proven, is potentially quite a bit more useful. People are gravitating towards these new tools. The reason I call it the Apple effect is that I remember when, all right, it’s still happening, the metaverse has been a little bit sidetracked right now by AI, but when we were talking about metaverse products, or any kind of augmented reality products, I literally saw reviews from people who got these beta-testing kits going, "well, it isn’t as simple as my iPhone." And I was just going, when did that happen? When did early-adopter tech get compared to household commodities? For me, it actually inhibits the growth of the tech, because everything has got to be iPhone-easy, so there’s no time for things to develop. The first try is never going to be that refined.
Jeff Eaton 40:25
I am a compulsive hoarder of information. I’ve got a 28-terabyte network storage system in my house that I just use as my black hole for stuff I come across that’s interesting, which turns into my own knowledge management challenge, which is a subject for probably a whole different podcast for a whole different audience: "Jeff organizes his hard drive poorly." But one of the things I tend to do, whenever there’s interesting tech stuff on the research side, the "this isn’t a product yet, but someone just did something very interesting" side, is grab all the information I can about it, dump it into a folder, and say, that’ll be interesting later.
Noz Urbina 41:14
I love that. That’s really good. I’m going to try to do that.
Jeff Eaton 41:18
I found video clips from around 2005, 2006. This was before the iPhone came out, and not too long before it was broadly accepted as the way you’re supposed to do consumer technology stuff. It was basically university students who’d put together a multi-touch interface for manipulating objects and information on a computer screen. I grew up when touchscreens were something for museum kiosks: you had these big chunky buttons that you could press, and they would initiate an action, but the understanding was they weren’t great for doing anything complicated. ATM machines, yeah, exactly; that’s bank machines for our European friends. And there were lots of iterative improvements that needed to happen to make that better: resolution, using capacitive touch rather than actually detecting the pressure of forcing down on the screen. But also, and I think this is underestimated, the concept of multi-input touch, where two fingers means you can rotate things, and three fingers means something else. Yeah, exactly. That, as a mode of interaction, I think is just as revolutionary as the mouse as a mode of interaction instead of arrow keys. The iPhone, in a lot of ways, was built on the backs of lots of different parallel improvements. And it’s very difficult for technologies where all of those pieces haven’t yet come together, but the work is still continuing, when they’re judged by the standards of a product space where the critical mass of different pieces has come together and a company like Apple has come in and produced a highly polished result on top of it. Yeah, exactly. And we don’t know what that magical critical mass of improvements is until it happens, and everybody goes, ah, of course.
Noz Urbina 43:49
Coming back to AI, yes, it definitely hasn’t happened with ChatGPT yet, I think.
Jeff Eaton 43:55
For certain kinds of tasks, I think it has. Like creative stuff. The musician Brian Eno, back in the 70s, released a product called Oblique Strategies. It was basically just a deck of cards with phrases on it, like "try subtracting something": prompts that were very general, but meaningful in a lot of creative and ideation contexts. The idea was, if you’re blocked, you just pull out this deck and say, okay, what’s in my Oblique Strategies deck? Okay, let’s try subtracting some elements of this thing we’re working on; what changes, how does that alter things? It doesn’t necessarily solve your problem, but it can help you get rolling and get out of that locked state where you’re sunk in the details of everything that’s happened. Yeah, and I...
Noz Urbina 45:00
I think the roll-of-the-dice aspect, well, not quite roll of the dice, but the...
Jeff Eaton 45:04
It’s more directed than pure randomness, yeah. It has that outsider quality.
Noz Urbina 45:09
Yeah, that outside view. So it’s almost unlimited thinking, only limited by language and a general attachment to statistics. It can come up with stuff that comes out of left field for you. On almost anything you ask it, it often comes up with a couple of extra ideas, which...
Jeff Eaton 45:29
...is huge, yes. And that, I think, is one of the places where it really excels, just because of that aspect of it. I don’t think there’s anything inherently magical about large language models there; a deck of cards by Brian Eno from the 70s was able to accomplish aspects of that. And I think that’s the role a lot of people are finding tremendously valuable for LLMs now. They can go through and fill out the details of some of those speculative suggestions in a way that a deck of cards obviously can’t. I think that’s one of the places where people really feel like it’s a fundamentally creative tool that can help them with that kind of stuff, and that’s very interesting. It’s an aspect that I don’t think a lot of the other tools we’ve worked with in our space previously, like machine learning categorization, have really brought to it.
Noz Urbina 46:41
So when I say it hasn’t happened yet, what I mean is: it is a platform product. It’s a product on which other experiences can be built, and other things can be connected. Whereas ChatGPT itself has certain things it’s quite good at, a lot of things it’s not very good at, and a lot of things where it gives the impression it could be good, but it isn’t. So we come back to the problem of it being very good with language: it can bluff very well, and not know that it’s bluffing. The example, I think it was from a TED talk recently, which I find helpful for getting people to understand what it is and is not actually doing, because it seems like it understands what you’re saying, is the bicycle example. Until very recently, I don’t know if they’ve fixed this, if you asked ChatGPT, "if you rode a bicycle over a bridge, and under the bridge was lots of broken glass and nails, what would happen to the wheels of your bicycle?", it would say, you know, your bicycle tires would probably pop. Because it doesn’t actually understand...
Jeff Eaton 48:00
It isn’t making a model of the problem. There is no internal mental model of the concept you’re describing. What it’s doing is saying: when people have said things like this, what usually follows? And usually, that’s not the kind of word problem someone poses while also inserting a caveat that essentially renders it moot.
Noz Urbina 48:24
Yeah, so it can’t connect the idea of riding a bike over a bridge with the idea of riding a bike over broken glass and nails. It doesn’t actually understand.
Jeff Eaton 48:35
Right. And that’s inherent in the nature of how a large language model works. It understands semantic connections, and this is where I think it grabs people with this almost magical quality, because of the way language works and how words get used in connection with each other. At a nitty-gritty detail level, you can do things like adding and subtracting words: subtract the word "man" from "king" and you get "queen". Or add the word "woman" to "king" and you get "queen"; or add "son" and you get "prince". That feels genuinely magical to someone who’s looking at it, and it implies a kind of understanding. It looks like understanding, and we’re used to mapping that to intelligence, and it’s really spectacular. I’m not minimizing it; that is a legitimate advancement of the field. However, it does not understand the difference between a constitutional monarchy and a direct democracy. It doesn’t have those broader conceptual models. What it’s doing is detecting connections in words, and sometimes phrases: how they get used with each other, and how one word is often not present when another one is. It’s able to take huge bodies of statistical data about how people talk and use language and come up with really fascinating, really amazing stuff. But it is not building what we would think of as a mental model of X based on those kinds of prompts. And I think that’s the most important thing for people to remember when they’re talking about using it to solve problems. It’s very easy to get sucked into that.
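[Editor’s note: the word-vector arithmetic Jeff mentions can be illustrated with toy numbers. These three-dimensional vectors are invented for the example; real embeddings have hundreds of learned dimensions, but the analogy trick works the same way, with no "understanding" of monarchy anywhere in sight.]

```python
import math

vectors = {
    # dimensions: (royalty, maleness, adulthood) -- purely illustrative
    "king":   [0.9,  0.9, 0.9],
    "queen":  [0.9, -0.9, 0.9],
    "man":    [0.0,  0.9, 0.9],
    "woman":  [0.0, -0.9, 0.9],
    "prince": [0.9,  0.9, 0.1],
}

def cosine(a, b):
    # Similarity of direction between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# king - man + woman
target = [k - m + w for k, m, w in
          zip(vectors["king"], vectors["man"], vectors["woman"])]

# Find the nearest word that isn't one of the inputs.
best = max(
    (w for w in vectors if w not in {"king", "man", "woman"}),
    key=lambda w: cosine(target, vectors[w]),
)
print(best)  # queen
```

The result falls out of geometry over usage statistics, not from any conceptual model of royalty or gender.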
Noz Urbina 50:57
So my other example, which I think goes in a different direction, away from language and concepts: a simple example is that ChatGPT can’t do math.
Jeff Eaton 51:09
It can, if the question is common enough.
Noz Urbina 51:13
Oh yeah, sure. If it has looked at times tables enough, it can do that. But all of the math isn’t out on the internet, because that would take all of the atoms in the universe. So if you ask it questions it hasn’t seen before, or when you ask it to, for example, write a paragraph of a certain length, it can’t do it. Jordan Peterson, you know, Canada’s most infamous man, I’m not even going to go into that, you can Google him yourself, asked it something tantamount to "say nice things about Donald Trump" and "say nice things about Joe Biden". The answers were different lengths; the Joe Biden one was longer, which he took as a sign of left-wing bias. Then he said, do it again, but make them the same length. And it couldn’t, because it just doesn’t work like that. It hasn’t even understood the question.
Jeff Eaton 52:15
He did the same thing with apples, yeah. Its inability to follow precise instructions for generating variations isn’t a sign of bias. It’s a sign that you are bumping up against the edges of what it is conceptually built to be able to do.
Noz Urbina 52:40
Yeah, I think that’s what freaks people out: a computer that can’t count.
Jeff Eaton 52:47
Of all of the things that we’ve always said computers could do, counting is one of them.
Noz Urbina 52:51
Yes. The line has always been that a computer is just a shiny calculator with bells and whistles. And here is a computer that, in this example, can’t calculate, because it’s a language model. It doesn’t have a mathematical model, nor does it know how to access a calculator.
Jeff Eaton 53:05
And that’s actually how some of the really advanced applications of these large language models work: essentially adding recognizers on top, to say, okay, once you’ve done your large language model work and broken things down into these different kinds of symbols and tokens, I’m going to put together some tools for recognizing when certain kinds of well-understood problems are present in this natural language, like math, or chemistry questions. For those, I’m not bothering with all this linguistic stuff; I’ve got a very solid mathematical equation-parsing engine that can do math problems, so I will take over for this little chunk of the response where I can see that a math question is happening. This is a great example of how there are actually lots of different approaches to AI and machine learning. Wolfram Research, people have seen Wolfram Alpha, which is one of the services out there, and it can answer questions like, what are your odds of dying in a penguin attack in Antarctica? And it actually shows its work: okay, here are statistics I found, here’s research I found that matches these things, I have a model of how to convert between different types of measurements. It knows about statistics not because it has read all of the internet, but because it was built with a model of statistical analysis. And it can do things like say, well, I don’t think there’s actually enough information to answer that question; I can’t fill in the gaps in what I would say is necessary to calculate that. Or, if you ask a different question, it may say, oh yeah, okay, I can put together a rough estimate of X. But it can’t write a poem about that. It can’t wax poetic about it for two paragraphs or write in the style of William Shakespeare.
Exactly. But it’s designed to be able to reason through, to walk through, those structured, meaningful problems. And there is now a ChatGPT plugin that allows Wolfram Alpha to intercept certain kinds of sub-elements of those things and do spot answers inside of a larger language tool. A lot of the really interesting stuff takes that shape: mapping in specialized models and specialized rule systems when the problems can be recognized. And that’s what people have been doing for years: building meaningful, human-understood rules for reasoning through certain kinds of known problems.
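[Editor’s note: the "recognizer on top of the language layer" pattern might be sketched like this. The routing regex and the arithmetic-only evaluator are illustrative stand-ins; real systems like the Wolfram Alpha plugin do far more sophisticated problem detection and solving.]

```python
import ast
import operator
import re

# Map AST operator nodes to real arithmetic.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def eval_arithmetic(expr):
    # Safely evaluate +-*/ expressions by walking the parsed AST,
    # rather than trusting free-form eval().
    def walk(node):
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("not plain arithmetic")
    return walk(ast.parse(expr, mode="eval").body)

def answer(question):
    # Recognizer: does this chunk of natural language contain a
    # well-understood math sub-problem? If so, a real solver takes over.
    match = re.search(r"\d[\d\s\.\+\-\*/\(\)]*", question)
    if match and any(op in match.group() for op in "+-*/"):
        return eval_arithmetic(match.group().strip())
    return "(handed to the language model)"

print(answer("What is 17 * 3 + 4?"))   # 55
print(answer("Write me a poem"))       # (handed to the language model)
```

The division of labour mirrors what Jeff describes: the language layer handles language, and a rule-based engine handles the parts it can provably get right.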
Noz Urbina 55:55
So let’s tie these two threads together. Can we come back to our "can AI do this for me" thing from before? Can we say that if your content set is big enough, and your project is big enough, then actually training an AI and setting up some of these recognizers might be worth the squeeze? Plus, there are off-the-shelf categorizers and taggers; they need tuning, like any search engine, but they can do a good job for you as well. So with what’s been around for five years in the world of PoolParty and such tools, plus what we now have with ChatGPT and the ability to extend it through custom code, we’re getting to a point where we could get closer and do a lot of the pre-work. But your migration from unstructured content into your headless CMS is not going to be trivial. These are large projects. And, I don’t know if you’re going to hesitate to pull a statistic from your nether regions, but I’m going to try to make you: percentage-wise, what is the ratio between the effort of setting up and modeling your CMS versus actually migrating your old content into it? Obviously it varies, because everyone’s going to have a different amount of legacy content, and their target models will be more or less complicated. But could you hazard a ratio as a rough rule of thumb?
Jeff Eaton 57:37
So, in classic consultant mode, I’m actually going to hedge and say it depends. I would say that you have to decide where the line between migration and modeling is going to be drawn to even produce that kind of number. And I’ve talked about this before, I think even at an OmnichannelX event where I did a talk on it: modeling is the process of describing the ideal, platonic form of your content concepts. That’s an aspect of it, an element of it. But that’s also the kind of stuff that never quite survives contact with all of the real content. And that’s where migration throws curveballs. It is the history of all of the real stuff that you’ve actually produced over time. In that sense, the reason that starting migration early, and treating it as a meaningful part of discovery work and scoping work, not just a task that needs to be done, is so important is that all of those curveballs and all of those edge cases are contained in it. It is what you have actually done in the past, and it informs the modeling work by letting you know which edge cases you need to account for, and where your model, as it stands in its platonic form, is insufficient to effectively represent the full breadth of what you’re going to be doing with it. So I would say that if you treat migration as an informative part of that investigation, it goes as far as 50/50, maybe more.
Noz Urbina 59:42
That’s the number I was going to pull out of mine as well.
Jeff Eaton 59:44
But if you treat the migration as just the mechanical process of writing the code that moves things from point to point and translates HTML to rich text or whatever needs to happen, I’d say that number goes down considerably. That’s still complex and difficult development work, but if you actually understand all the stuff that’s there, it’s very rare that you’re having curveballs thrown at you where you say, okay, we’ve got to go find a new way to do this, or, oh, we’re putting...
Noz Urbina 1:00:23
...it into shape, yeah. And it’s the same distinction as in any complex solution development: how much of it is understanding the problem, and how much is the build. If you’ve properly understood the problem, and it’s a difficult problem, that understanding will take you much more work than the point where you can go, okay, now we’ve solved the problem in theory, we just need coders to code the...
Jeff Eaton 1:00:48
Things, yeah. And sometimes it also means saying, okay, here are the criteria by which we’re going to treat stuff as a legitimate special case, and say, our really carefully modeled system will not directly account for this; we’ve just got our "sin eater" content type, whose job is to hold all the ugly stuff. You have to figure out how far you have to go with the rigidly modeled stuff before you just say, well, that’s an exception.
Noz Urbina 1:01:21
I was just going to say, it sounds like you take a slightly different approach. What we often do is have our new shiny model, which is the as-if-we-wanted-it version, and then we have, I love the term sin eater, our kind of blob type: the stuff goes here, the generic page, exactly. But if we’ve got to get a move on, and management says you need to launch by this date and have the stuff in by this date, all right, these two objectives are at odds. So we’re going to have to do a dump, so that we can decommission the old system: drop it all in there, but it doesn’t go into our shiny new model. And I was kind of getting from what you were saying that you’re sometimes making more allowances in the new model to get some of that stuff over faster. We’ve actually had three stages before: the totally-just-dumped-in content, the kind-of-better content, and then the shiny new model, and we actually kept track of them in the CMS to make sure they weren’t mixed up.
Jeff Eaton 1:02:35
We do something very similar. I think one of the challenges is, anytime you have those sorts of pressure release valves, not abusing them. Yeah, it requires ongoing monitoring, to make sure that they don’t, while you’re not looking, turn into "30% of the website is your special-case rando blob content type that anything can be dumped into." Ironically, that’s where the current generation of page builder tools, the "oh, I’ll just throw some blocks together and make a special page" tools, came from: 90% of them started as "we just need to do some of these real special, weird pages," and two years later the company is going, "75% of our website is 'page' pages; they’re exactly what everybody wants, but we have no idea what they are." You have to keep an eye on that, and recognize that if those weird, uncategorizable edge cases are really growing to become that much of your model, it’s a sign of one of a few things: your model is insufficient for the use cases that are actually being played out; or there’s a governance problem, because people either aren’t aware of the new model or aren’t being given the tools to effectively use it; or there are legitimate new patterns of usage emerging that you are going to need to expand the model to accommodate. So in essence, it can be useful to have those pressure release valves, but you have to treat them as part of an ongoing evolution of your model, not just where the bad stuff goes.
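[Editor’s note: Jeff’s "keep an eye on the pressure release valve" advice lends itself to a simple automated check. The content type names and the 30% threshold below are invented for illustration, not a recommendation.]

```python
from collections import Counter

def blob_report(inventory, blob_type="generic_page", threshold=0.30):
    # Measure what share of the content inventory sits in the
    # catch-all ("sin eater") type, and flag it past a threshold.
    counts = Counter(item["type"] for item in inventory)
    total = sum(counts.values())
    share = counts.get(blob_type, 0) / total if total else 0.0
    return share, share > threshold

# Hypothetical inventory export from a CMS.
inventory = [
    {"type": "article"}, {"type": "article"}, {"type": "recipe"},
    {"type": "generic_page"}, {"type": "generic_page"},
]
share, flagged = blob_report(inventory)
print(f"{share:.0%} of content is in the catch-all type")  # 40%
print("model review needed" if flagged else "ok")
```

Run on a schedule against a real content inventory, a check like this turns "while you’re not looking" into an alert: a rising share signals a model gap, a governance gap, or a genuinely new usage pattern.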
Noz Urbina 1:04:25
So we’ve done some things, some as simple as: it’s in there, the developers can put out displays on whatever channel it needs to be, but no one can say File > New > Page. If you create a new X, it’s got to be of a type from the model. You can’t go in and create new content in this blob, kind of squishy form. Yeah. Okay, so we are way over; it’s a double episode, two in one, on AI and migration and back again. This has been awesome. Jeff, I really have got to wrap it up there. I love talking to you; I think I could talk to you indefinitely. Always a pleasure. I see I’ve been on your podcast, what, two or three times? And I think this is the first time you’ve been on ours; you’ve done webinars with us before. All right, well, we’ve got to do it again sometime. Absolutely. Awesome. So thank you so much, and thank you all for joining. Remember that this is part of our Designing Content for Headless, Omnichannel and Personalization session series on OmnichannelX.digital. You can check that out wherever our event listings are showing up; this will be in the on-demand section, and you can also check out all the panel discussions that Jeff was part of with Carrie Hane, myself, Meghan Casey, Cruce Saunders, and other great podcast episodes there. It’s a cornucopia of content modeling and content design goodness. So again, thank you very much, Jeff, and we’ll see you soon: depending on when you’re hearing this, we’re going to be recording the next panel discussion in a few weeks. I’m looking forward to it. Fantastic. Cheers, everybody.