Talking Context on the iBuildWithAI Podcast
Joined Marcelo Lewin for a conversation about context engineering, agent skills, AI for power users, and more ...
Ready to learn how to think about agents?
👉 Join me for Mastering Agent Skills — a hands-on workshop to get you up and running building your own skills library.
Context is what makes knowledge work valuable: Knowledge workers carry rich context in their heads—history, decisions, preferences, and understanding of why things are done a certain way. AI has general knowledge but lacks this specific context, so making it explicit is essential.
Context engineering means making the implicit explicit: Rather than keeping knowledge in people’s heads or scattered discussions, context engineering is the practice of curating, maintaining, and managing information so AI can access and use it effectively.
Tools give AI memory and the ability to act: When models can read and write files or interact with external systems, they gain the ability to maintain their own context over time and participate in their own learning.
Agent Skills are the emerging standard for dynamic context: Instead of loading everything at once, skills let you label and describe bundles of context that the model can fetch when needed. This keeps things manageable and relevant.
Non-engineers often benefit most from AI tools: Power users and knowledge workers who lack coding experience can achieve remarkable things with AI. The main gap is knowing what’s possible, not writing code.
The challenge is conceptual, not technical: Learning to think like an agent—understanding what information would help a model that has no prior knowledge of your situation—takes practice, but anyone can develop this skill.
Communicate intent, not exact steps: Just as good managers tell people what needs to be done rather than dictating every action, effective context engineering shares goals and constraints while leaving room for the model to be creative.
Use context for things you do repeatedly: If you only need something once, a prompt is fine. But for recurring tasks or knowledge you’ll reference often, that belongs in persistent context.
Organisations should have a Context Owner: Someone accountable for preparing and maintaining context helps teams adopt AI faster and ensures the systems everyone uses have proper, well-curated information.
The future is more dynamic context: As agents become more capable, context engineering is shifting from carefully constructed retrieval systems toward giving agents pointers and letting them fetch what they need on demand.
Marcelo: Welcome to another episode of the I Build With AI podcast, where I have conversations with humans who build apps with AI. I’m your host, Marcelo Lewin.
In today’s episode, I’m chatting with Eleanor Berger, AI leadership expert and founder at Agentic Ventures, all about Context Engineering for knowledge workers. But before we get started, you can help us grow the podcast by reviewing it, following it, and sharing it.
Marcelo: Eleanor, welcome to the podcast. Glad to have you here.
Eleanor: Hey Marcelo, thanks for having me.
Marcelo: It’s great to have you here. You wrote a blog article, and that’s where I actually saw your article on context engineering. It was a while ago, it was on LinkedIn, and I reached out to you and I’m like, “You’d be a perfect guest for that.” So, thank you for being here and for agreeing to be on the podcast. I’m very happy about that. Why don’t we start with your background? Tell us a little bit about your background, how you got into AI, how you got into vibe coding.
Eleanor: Sure. So I’m a software engineer by trade. I’ve done this for many, many years, and eventually moved more to also leading engineering organisations. I’ve worked in quite a few startups and later on at Google and at Microsoft. I got interested in AI... well, I guess I got interested in AI as a child reading science fiction books, but it was unattainable. But then, about ten years ago, a bit more, when deep learning started showing so much promise, I realised I have to get into it. So I learned deep learning and machine learning and got more and more into AI, and it became a bigger part of my career.
And it kind of took a boost all of a sudden, you know, when generative AI became so powerful, because for the first time, there was a lot of work with customers, with internal products and so on. And so it kind of became the main thing I do. Yeah, and now it’s I guess kind of very central to everyone. So I was a bit lucky, in the right place at the right time, sort of coming purely from a fascination with AI.
Marcelo: Most definitely. I mean, AI is definitely taking over. Yesterday I had a great conversation—well, this would be two weeks ago when I publish this podcast—with a Google VP of AI solutions, and they’re embedding AI pretty much everywhere, in every app, right? We were talking about what a great time it is. Scary for some people, totally understand that, but also what an incredible time to be alive and going through this revolution and be able to not only participate in it, but also to influence it.
Eleanor: Yeah, it’s absolutely crazy right now. I mean, in the most positive way I can say that.
Marcelo: Right. Yeah. Now, how did you get into vibe coding?
Eleanor: Obviously being into AI, I was interested in how it can be used in software development very early on. I did a lot of different experiments. The models kept improving. So initially, there was a lot of orchestration trying to get something to work. But then last year really, we started to see really good models that can do proper Agentic software engineering or coding or vibe coding, whatever you call it.
That became... at this point, I kind of moved... I used to work at Microsoft, but I left there and started a consulting practice. And initially I was helping people with AI engineering, integrating AI into their systems. But increasingly people just wanted to hear about AI in software development. And so it was really in response to demand. And of course, in my own work, it became a more and more important thing.
And so eventually, after giving the same advice to people again and again, I, together with a friend, started a course where we teach about AI-assisted software development. We’re now having the second cohort of that course, and it’s going great. Obviously, that’s very central now. Everyone’s interested in that. It’s nice to be able to help people do this transition, move into this world of Agentic coding.
Marcelo: Definitely. So in that course that you’re teaching, are you finding that it’s mostly software developers attending it? Because my audience, right, are technical but non-software developer people. So knowledge workers, domain experts, business professionals, that are starting to vibe code and create and build products or tools or workflows using agents. What kind of audience are you finding in your courses attending that?
Eleanor: You know, that’s really interesting because when we started, my assumption was that it would be best to teach software developers. They’ll make the most use, they’ll have the most benefit out of adopting these tools. And definitely that has been the case with some people, where they’re kind of integrating AI engineering into their existing practice. But what I realised is the most benefit actually accrues to people who are not coming from software engineering. People who are just kind of, let’s call them power users. You know, they just have this passion and a little bit of chutzpah for building something, and they realise the tools are now available. They don’t have to know how to code necessarily. They don’t need to have a lot of engineering experience.
And it’s been incredibly gratifying to support these people on their journey and see amazing things that they are able to do. And in fact, it influenced a lot my own practice and my own career because I decided to shift a little bit and focus more on supporting power users, realising how powerful it is. So this is something that surprised me and is absolutely delightful.
Marcelo: Well, it’s funny you say that because my original site was icodewith.ai and I rebranded it to ibuildwith.ai because my focus became these power users, domain experts, knowledge workers that are not developers. Because I personally feel that is the future for vibe coding, right? It’s going to be a huge audience where they’re going to be building a lot. And we want to make sure that what they build is done properly. Even though they won’t hand-code, they still need to be able to build something that can go to production.
Eleanor: Yeah. And I think in many cases, the biggest difference between people who come from software development and ones who are picking up these tools maybe with less experience, is knowing what’s possible. Because you don’t really need to write the code. And increasingly the models are so good, they’ll do a very decent job. So initially, maybe like a year ago, I would tell people, “Hey, don’t release anything that you wrote like this with AI because maybe it’s not really in the quality required.” I don’t think that’s a problem anymore. Actually, that’s okay now. But often when you work with people who are these kind of power users or, as you call them, knowledge workers, they know what they’d like to achieve, but they don’t know what tools are available or what techniques are available. And this is where it’s possible to help them a lot.
Marcelo: Yeah, completely. Well, and now people are probably wondering, well, what does all this have to do with context engineering, which is what this podcast is about, right? But it’s actually everything to do with context engineering. So why don’t we start with just defining context engineering? Define what it is. How is it different also from prompt engineering? Because at the beginning of 2025, everybody was talking about prompt engineering. Then from the middle of 2025 to the end, it shifted to context engineering. So let’s define it and talk about the differences between that and prompt engineering.
Eleanor: Yeah, I mean, I think the “engineering” word there is a little bit maybe an exaggeration in the context of people who are not actually doing engineering work, and it can be scary to people. So I’d encourage people to ignore it and just think about Context and the centrality of context to everything we do.
You’re talking about knowledge workers. What does it even mean to be a knowledge worker, right? It means being someone who has a lot of context. That’s what everyone does. Go to any office building or any Zoom meeting anywhere, and you’ll find people with very rich context. They understand the history of what they’re doing, why things are done the way they’re done, to what end, what’s important about it. All the little details, decisions that have been made, little opinions and tastes and choices. This is what they do.
Now, to work with AI, to get the benefits from AI, AI needs to have this context. It is sort of implicit, it’s in the air when we work with people. But when you first encounter an AI model, an LLM, it doesn’t have all this context. It has general knowledge, right? It knows everything on the internet. But it doesn’t have the context that you have. And so context engineering or context ownership or context setting, whatever you want to call it, is the practice of taking this context that until now has been mostly in people’s heads (even teams and companies that are relatively good at documenting usually don’t have rich enough context in a form AI can read) and making this context explicit. Curating it, maintaining it, managing it.
And when you have really good context, AI models can do amazing things. They’re so powerful. But without context, they’re kind of... they’re bland. They have only general knowledge. They don’t understand what’s going on. And so this is central. And I think at every level, from engineering large systems to, like you say, a knowledge worker who is just sort of trying to integrate AI into their own practice or into their team’s work, being able to set context, to maintain it, to evolve it is central. It’s so important.
Marcelo: And it’s interesting you said that, to maintain it and evolve it. Because prior to AI, documentation, the context, was an afterthought. It came after the designing and the building of the application. Now it has basically become the source code, and the output seems to be the byproduct, the binary built from that source code, right?
Eleanor: Yeah, because I think for most people—there are some exceptions, they’re quite rare—but for most teams, for most people, the context wasn’t in documents. It was in their heads. It was in memories. It was in some discussion they had in the cafeteria the other day. It was in some organisational cultural history. It wasn’t actually written down. The documentation, like you said, was an afterthought. And so it is very different. AI doesn’t have these memories. It’s kind of like a fixed blob. Anything you don’t provide in the form of text doesn’t exist.
Marcelo: Now, context also includes tools. So maybe you can explain a little bit about what does that mean when it comes to context, these tools?
Eleanor: Yeah. Everything that the model has access to that isn’t already in the model, which is fixed. So models have become very good at calling tools, which really means having some interaction with the outside world beyond the chat itself. Anything from reading a file on the file system or in your Google Drive or SharePoint or whatever it may be, to writing a file. Think about it: a model that is hooked up to a tool that allows it to read files and write files all of a sudden has memory. That’s very powerful because it can itself maintain this context over time.
It can take actions. Maybe it’s a tool that can virtually click a website and do something that previously a person would have to do and maybe feedback to the model. And so the ability to interact with the environment gives the model the ability to participate in its own context engineering, if you will.
Marcelo: So these tools can then, like we do as humans, right? We don’t need all the context for every project all the time. We need certain things throughout the phases of the project. That’s where sort of like dynamic context comes in, right? Depending on which phase of the project you’re at, you may need a different piece of information. Maybe we can talk a little bit about dynamic context engineering or... we can just say dynamic context building. Or I think there’s another term used, progressive, right?
Eleanor: Progressive, yeah. So probably the best thing to look at right now is Agent Skills, because they’re the emerging standard. They’re not the only option, but they’ve become the standard. Again, if we want to work a little bit by analogy, you don’t have everything... no one has everything in their head all the time. We have maybe books on a shelf or folders with some documentation we wrote. And we know that it’s there. We know that we might need it one day and where to get it from.
We can have the same with AI with skills, which is just this very simple format. It’s done by giving like a label and a description for every skill, which is a bundle of context, and telling the model: Here, you have all these little different bits of knowledge, of skills. Go and get them if and when you need them. Not all the time, because you can’t have it all at once loaded.
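To make that concrete, here is a minimal sketch of what such a skill file might look like, following the SKILL.md convention of a short YAML header plus a free-form body. The skill name, description, and contents below are hypothetical examples, not something discussed in the episode:

```markdown
---
name: friday-management-email
description: How to write the weekly Friday status email to management. Use when drafting or reviewing the weekly update.
---

# Friday management email

- Subject line: "Weekly update – <team> – <date>"
- Three sections, in order: shipped, in progress, blocked.
- Keep it under 300 words; link out to details instead of inlining them.
```

Only the name and description sit in the model’s context up front; the body is fetched when the model decides the skill is relevant.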
And there are of course other ways. So the model could make a call to a website and do a Google search and read back the information when it needs. These are all options for getting context dynamically. Directing that in itself requires some work. Context engineering or context ownership or whatever you want to call it. Because we don’t want a model that sort of has infinite access to everything, including some things that could be dangerous, right? The internet can be a dangerous place sometimes. Or some things that can be confusing. There’s contradictions, things that are not relevant. So you want to direct also the process of getting context dynamically. But when you do that, you all of a sudden have this much richer palette you can work with.
Marcelo: Definitely. Where do you think non-engineers, non-developers—like knowledge workers, like I’m putting them all into one bucket—but non-engineers would struggle with building proper context? Because the key is building the proper context, the proper amount, at the proper time.
Eleanor: I think, and I also know from my experience now working with many people and teams, the struggle is more conceptual than technical. Because technically it’s all very easy because all this context is always text, right? And we all understand text, it’s very natural for us.
Where people struggle is thinking... let’s say, thinking like an agent. Trying to look at the world through the eyes of an agent that has general knowledge but no specific knowledge, no context. And understanding what would help this model. And it requires sometimes a bit of a mindset shift. But primarily people get it through practice. So when I work with people and they start, for example, building Agent Skills, initially it’s very confusing: How much should I put in it? How little? Should it be more specific? Less specific? Do I need to categorise it in some way? Is it okay to just dump some knowledge in there?
And by the time they iterate on the same thing, you know, seven times, ten times, twenty times, you get a feel for it. And I think that’s the best way to learn it. And anyone can learn it. You don’t need to be an engineer. You don’t need to be, I don’t know, a PhD in AI. All you need is practice because it feels a little bit different than interacting with people, teaching a colleague or something like this. But it’s not radically different. It’s a little bit different. It’s nuanced.
Marcelo: To me, it seems like we do context engineering every day at work in our projects, right? When we’re dealing with colleagues, we don’t give a person everything that we know. We give them the right information at the right time for the particular task we’re trying to accomplish together. And then move on to a different colleague and do something a bit different. That in itself is context engineering. Would you agree?
Eleanor: Yeah, absolutely. And we will have what is called, I guess in psychology, a Theory of Mind. We’ll make some assumptions, usually correct because we understand people well, what’s going to be beneficial for them? What it’s like to be that other person? And getting just enough information, not too little, but also not too much. And we can learn this just as we did working with colleagues in the office. We can learn to do the same with AI. It just takes a bit of practice.
Marcelo: So what type of work do you feel is most beneficial for context engineering versus just a prompt? And maybe you can give an example. I don’t want to put you on the spot, but, you know, if you can give an example of, “Oh, for this, a prompt is good enough, but for this portion, it would be more context engineering.” Provide an example of what you mean by that.
Eleanor: I think the best distinction is something you do once or something you do multiple times. Or something you need to know only once versus something you’ll need to know multiple times. So if you only need to do something once or you only need to share some knowledge with the model once, yeah, you just type it or paste it or whatever, and it’s a prompt.
But if you want to teach the model to do something... if you want, I don’t know, there’s some workflow you do. Every day I have to write a summary email for my department. And it’s always the same. I don’t want to have to prompt it every day. It doesn’t make sense. I’m not really solving anything for myself, and I’m not making the AI work more efficiently if I have to prompt it every day with the exact structure and the information that needs to be included. That should be in the context.
Marcelo: So it’s almost like building templates of context that can be reused. I mean, that’s where skills come in, right?
Eleanor: I think templates is one aspect of it. So there are different kinds of skills, different kinds of context. The skill is just the mechanism for including context. Some of them look like templates. Like, “Here’s how you write the kind of email that we send every Friday to management.” And it’s like a template. It will always look the same, but you’ll put different information there. There are other things that look more like documentation, like a library. Like, “Here’s everything you need to know about Project X.”
Marcelo: Like a style guide or...
Eleanor: Yeah, exactly. A style guide or a history or something like this. And there are others that look more like workflows. Whenever we need to do something, you do one, and then you do two, and then if X you do three, and if Y you do four. So that’s more like workflow. All of these things look the same. They’re just a bunch of text. But your expectations for how the AI will use it are a bit different.
Marcelo: How do you strike the balance of giving structure to your context, but also allowing the LLM to do what it does best, which is being non-deterministic and creative, right? That way you’re not putting it in a box where you’ll never get anything new. So, how do you strike that balance when you’re building context of letting it do its non-deterministic thing within a structured framework?
Eleanor: Yeah, that’s a brilliant question, right? Because that’s exactly where you need to get a feel for it. Where there isn’t like a clear rule you could follow every time. Again, by analogy, think about your colleague. You’re teaching your colleague to do something. If you’d give them like a flowchart of exactly what to do... First of all, they’re not going to like you very much because no one likes being told what to do in this way. But also, they’ll never have any original thought. Right? I worked for years as a manager. The first thing I learned is don’t tell people what to do because they’ll stop thinking, they’ll stop being creative. Tell them what needs to be done.
So that’s one way of thinking about it. Communicate to the AI the intent. What do you want? Like, what does it mean for this to be good? What are you trying to achieve? What are your goals? Then there are some things, you know... the title always needs to be in block capitals and in blue. That’s not negotiable. But everything else maybe it can be.
When you are able to communicate your intent, communicate your constraints, but not dictate everything, you’re benefiting from the intelligence and the creativity of the model. You’re letting it think for itself, be inventive sometimes. It’s not always obvious the first time, so you iterate. You try different options, you test them again and again. A lot of what you do with AI is kind of empirical. You try something and you see what the results are like, and then maybe you change something and you try again until you get it right. But after a while you also develop intuitions for how to do this well.
Marcelo: It’s interesting you said that because in my Cody product builder that guides users from zero to one building products, originally I had a very strict “Make sure you ask these ten questions.” And then I learned about outcome-based engineering, where you don’t tell a task a particular [set of] questions. What I ended up doing is setting up ten categories that the AI must have enough information about before it can move forward to the next step in the workflow. So we went from “Ask these ten questions” to “Make sure you satisfy enough information you need based on these ten categories.” And now it asks whatever questions it needs to be able to satisfy that outcome.
Eleanor: That’s a great example, right? So if you ask me something I already told you, that’s annoying and it wastes time. But if I said in the context, “These are the things that are needed for completing this task,” then you can decide, okay, these things I could already infer—they’re obvious, or you told me them—and these other things I’m going to have to ask you. So that’s a great example.
Marcelo: Yeah. And it’s almost... like you mentioned this before, where this is how you should really manage people: Don’t tell them exactly how to do it. Tell them what you want the outcome [to be] and then let them use their creativity to figure out the best way to do it. Of course, you can give guidance, right, as you go along if asked. But at the end of the day, within professionals, we don’t tell each other “Exactly do it these ten ways.” We say, “Hey, this is what we need. This is why we need it. Now you go for it and figure out the rest.”
Eleanor: Yes. And that’s exactly how we should work with AI agents. It’s more robust. Or maybe “antifragile” if you’d like to use this nice term. When we define everything very rigidly, it’s brittle. Something is going to break. It will not work exactly like we expected, and then what? But when we communicate intent, what does a good outcome look like? What are the constraints? Then it’s adaptable. It can figure out what to do based on the context.
Marcelo: And it seems the better a person gets at context engineering and speaking to AI, it actually can translate into being a better communicator, at least in a professional manner, at work.
Eleanor: Perhaps. I’d be a bit careful. We shouldn’t talk to other people like we talk to machines. But it does help you think... I mean, I guess it makes you a good manager. There are things that people like me who spent many years trying to become good managers can now probably learn in a few weeks because you very quickly iterate in a way you can’t with people.
Marcelo: Right. That totally makes sense. And I agree with you, we shouldn’t talk to humans like we talk to machines. But there is a little bit of, “Oh, I see, if I give this extra more information, it understands me better. So maybe I should do that also in real life.”
Eleanor: Yeah, definitely.
Marcelo: So what roles do you think leadership can play in a company to support context engineering efforts?
Eleanor: Have an explicit role of a Context Owner. Sometimes it can be a small group depending on the size of the organisation. It can be either one person or a group that’s responsible for it. And the reason for that, first of all, adoption of AI is not equal. Some people need a bit longer to get used to it. Some people are very eager, they’re enthusiastic about it and they want to move forward. And so it’s a perfect role for someone who’s very enthusiastic about AI and they’re asking, “How can I help our team adopt AI faster and better?”
And ownership means accountability and responsibility. So someone to think, “Okay, it’s my role... Like, everyone here will be using the AI systems, and if I didn’t make sure that I prepared the context... Doesn’t mean that I have to do it all myself, maybe I have to review contributions or make decisions... but it’s on me to make sure that the AI system that everyone is using has proper context.” This works really well. I think in my experience, this is usually what unlocks, at least initially, the first steps. You’re not requiring everyone to be equally good at this. You’re saying it is an important function for us, it’s something that will matter for all of us, but here, Marcelo is going to take it on himself and we’re very lucky to be able to benefit from his work.
Marcelo: Now, there’s a couple of ways of working with AI, right? One of them is, like in Claude Code or Copilot or whatever you choose, where it’s very interactive. And what I mean by that is you tell it, through context engineering and prompt engineering, to do certain things, it comes back, and you iterate, and it involves the human. It’s the human in the loop, constant iteration, right? But the other way, which is starting to take off quite a bit, right, is sending autonomous agents off to perform a task and then come back with the finished task. No interaction with the human. I would assume that context engineering, context building, are very different for each of those approaches.
Eleanor: I don’t think context engineering should be different, but I think that the ability to specify a task, to communicate intent, is really much more important when you are delegating to an autonomous agent. And in a way, this is the key, right? So I think there is a lot of focus right now on this interactive work and what people call maybe “vibe coding.” It’s very exciting. Like emotionally, it’s kind of amusing to do this. I think it really... right now there’s a lot of interest in it from people who are experiencing it for the first time.
But the truth is, to really benefit from AI, it doesn’t help necessarily to have a human in the loop keeping prompting and trying again and again and again. This is actually... introduces a lot of randomness. It doesn’t improve the skills of the person doing the work because they are not getting better at specifying things clearly. And it’s not improving the quality of the work done by the AI because that’s exactly what isn’t helpful to AI.
So I think it’s a good introduction for many people to experiment a little bit with a chatbot or with Claude Code or something like this in a very interactive, turn-by-turn mode. And after a while, when people want to really make the most of working with AI, really unlock productivity improvements, really get better quality software and get better collaboration when working in a team or in a larger distributed project, this happens much better by figuring out how to delegate complete tasks with, again, strong intent communication and clear context. That works a lot better.
Marcelo: Yeah, definitely. A couple more questions, we’re almost at the end of the podcast here. I wanted to get a couple of terms out of the way because I think these are terms that people will hear about when they’re building context and they’re new to it, and they may not be aware of. But maybe you can address Context Pollution and Context Window itself.
Eleanor: Context Window is the amount of text the model can look at at any given time. And it is finite. It is advertised depending on which model somewhere around 200,000 tokens, which are these like sub-word units, to even a million. Effectively it’s less. So I would say something like hundreds of thousands of these tokens. You can imagine it’s something like maybe 300,000, 400,000 words in English. Now that’s a lot. It’s a huge amount of text, but it’s also limited. It’s finite.
If you really want to have a lot of context for your project or maybe for an entire organisation, it would have to be dynamic. You can’t stuff it all at once into the context window. And so that’s worth being aware of. And this is why a lot of people are thinking now about techniques and approaches for loading context dynamically and progressively. Agent Skills, being now the standard, really solve this problem very elegantly by not loading it all at once, but giving each skill a title and a description and allowing the model to load it dynamically when needed. In other cases, it’s by running searches, maybe on a database or on an intranet or anything like this, and getting it into the model on demand when it needs it.
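The loading pattern Eleanor describes can be sketched in a few lines of Python: only skill names and descriptions go into the context up front, and a skill’s full body is fetched on demand. This is an illustrative sketch under assumed names, not any particular framework’s API:

```python
from dataclasses import dataclass


@dataclass
class Skill:
    name: str
    description: str  # always visible to the model, kept short
    body: str         # loaded only when the model asks for it


# Hypothetical skills library; in practice these would live as files on disk.
SKILLS = {
    "friday-email": Skill(
        "friday-email",
        "How to write the weekly status email to management.",
        "Subject: Weekly update ...\nSections, in order: shipped / in progress / blocked.",
    ),
    "project-x-history": Skill(
        "project-x-history",
        "Background and key decisions for Project X.",
        "Project X started as a replacement for the legacy reporting pipeline ...",
    ),
}


def system_context() -> str:
    """Cheap, always-loaded index: one line per skill."""
    return "\n".join(f"- {s.name}: {s.description}" for s in SKILLS.values())


def load_skill(name: str) -> str:
    """The expensive part, fetched only on demand."""
    return SKILLS[name].body


# The index stays small no matter how large each skill body grows.
print(system_context())
print(load_skill("friday-email"))
```

The key property is that the index scales with the number of skills, not with their total size, which is what keeps a large skills library inside a finite context window.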
Marcelo: And if you do load it all at once, that’s where you can get into Context Pollution, right? Maybe explain a little bit about that.
Eleanor: Yeah. It’s not helpful. It wouldn’t be helpful to you if I gave you information that is wrong and contradictory. Because then you’d have to think like, “Okay, so here it says that it’s black and there it says that it’s white. Which is which? I’m going to have to figure it out.” So the more precise you can be in the context, the better. And it’s not always so trivial because in our world there is a lot of ambiguity and contradiction. So if we can spare the model this ambiguity and contradiction and not expose it to information that is confusing, we are going to get better results.
Marcelo: Right. And it’s also kind of like when you give information that is... that has nothing to do with what you’re trying to do, right? So if I’m telling you to paint the house and I give you a background of when and where I was born, I mean, that’s interesting information, but it has nothing to do with painting the house.
Eleanor: Sure. But I don’t know if it’s going to be detrimental to the performance of the model beyond the fact that it’s additional context. And model performance does degrade the more context you add.
Marcelo: Right. So where do you see context engineering evolving? How do you see it evolving in the future?
Eleanor: I think it’s becoming more and more dynamic, right? So with agents now, we’re allowing... we’re giving the agents more control to go and explore. If in the past, pretty recent past, maybe a year ago, context engineering was a lot about building a retrieval system where we very precisely decide what to feed into the model. That still exists, but in more dynamic systems, if you work for example with an agent like Claude Code or Copilot or any of these things, you don’t do too much of that. You’ll just give pointers to the agent and tell it, “There is information here, go get it if and when you need it.” So that’s definitely a trend we’re seeing. And it depends a bit on scale. When you work at large scale, like real engineering projects where you need to, I don’t know, serve millions of users efficiently, you would invest a lot more rather than do it dynamically because you want to cut latency, you want to cut costs, you want to have some guarantees about the quality of what you’re producing. Whereas in interactive work between a person and an agent, you have a lot more flexibility, right? If the agent got something wrong, I can nudge it. I can tell it, “Oh well, that’s not quite it. Try again.”
Marcelo: Well, Eleanor, thank you so much for being on the podcast. This was a great conversation. I really appreciate it.
Eleanor: Thank you for inviting me. It was great.
Marcelo: And if people want to get a hold of you, we have links on the podcast page, but do you want to give any other kind of link?
Eleanor: Sure. I think the best thing is go to agentic-ventures.com. That’s my platform for helping people work with Agentic systems. A lot of stuff I’m really excited about, including opportunities both in workshops and in offline content to learn about Agent Skills and the Agentic platform. So come and get it.
Marcelo: Definitely. Well, thank you so much, Eleanor. I really appreciate it.
Eleanor: Thank you, Marcelo.


