
EdgeworthBox announced its Generative-Procurement-as-a-Service offering for the Request for Proposals business process last month. We have built Generative AI tools to address the workflows of the three stages of a reverse auction: generating a first cut of a statement of work, drafting a supplier’s proposal, and making an initial cardinal ranking of the final supplier bids. We are offering this as a service, to start. These tools complement our flexible RFx platform focused on the group dynamic in procurement events.

This will enable us to refine our understanding of the customer workflows and to identify the different players on both sides of the table.

This is not our ultimate ambition.

We are developing Agentic AI models.

Before we can talk about what an Agentic AI approach entails, let’s start by talking about what some have called agent-based models. This is key to understanding both the status quo and why almost every digital transformation project fails.

According to its most voluble proponent, agent-based modeling is a human-led approach to solving problems that stands in contrast to what he calls “equation-based modeling.” In the quote below, agents are individuals. They are human beings.

“Agent-based modeling (ABM) simulates the interactions of individual agents within a system, allowing for the study of emergent behavior that arises from these interactions. ABM is particularly valuable for capturing complex, system-wide effects that result from the interactions of autonomous agents.”

Strip out the jargon and the SAT words and the logic here is straightforward. The reason so many software solutions and digital transformations fail is that they are inflexible. They do not account for the complex interaction of multiple agents using data and intelligence from a wide variety of sources. Every procurement team is different. Every project exists in an idiosyncratic context with unique institutional conditions.

Yet, traditional software approaches attempt to force people into a standard, one-size-fits-all approach to problem solving. This is good for the software vendor, who makes a fixed-cost investment: build a tool once and sell it to millions of people. In forcing every imaginable type of peg into their single round hole, software vendors generate stratospheric gross margins for themselves. Operating leverage is a beautiful thing. For them.

You can have whatever car you want, as long as it is a Lada.

The end result is that large incumbent procurement systems cost a bucketload to implement and they still don’t work. Their notorious inadequacies make everyone gun-shy when it comes to technology.

The essential complaint that the agent-based modeling proponent appears to make is that procurement is a services problem because of its complexity. Software vendors solve for a software problem.

If a fixed income trading desk needs a machine to calculate the yield-to-maturity or duration of a bond, then you can build software for that. This is an example of a software problem.
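To make the distinction concrete, yield-to-maturity really can be reduced to a self-contained calculation. Here is a minimal Python sketch; the function names and the bisection approach are my own illustration, not any particular trading desk's tooling:

```python
def bond_price(face, coupon_rate, years, ytm, freq=2):
    """Price a fixed-coupon bond by discounting its cash flows at the yield."""
    c = face * coupon_rate / freq           # coupon paid each period
    n = years * freq                        # total number of periods
    r = ytm / freq                          # per-period discount rate
    pv_coupons = sum(c / (1 + r) ** t for t in range(1, n + 1))
    pv_face = face / (1 + r) ** n
    return pv_coupons + pv_face

def yield_to_maturity(price, face, coupon_rate, years, freq=2):
    """Solve for the yield by bisection; price falls as yield rises."""
    lo, hi = 0.0, 1.0                       # bracket the yield between 0% and 100%
    for _ in range(100):
        mid = (lo + hi) / 2
        if bond_price(face, coupon_rate, years, mid, freq) > price:
            lo = mid                        # model price too high: yield must rise
        else:
            hi = mid
    return (lo + hi) / 2
```

A bond trading at par yields its coupon rate, so `yield_to_maturity(100.0, 100, 0.05, 10)` converges to roughly 5%. The inputs are fixed, the answer is deterministic, and no context about the buyer matters. That is what makes it a software problem.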

If another company needs help understanding the risk of self-insuring the healthcare needs of its workforce, giving them an Excel spreadsheet isn’t going to be as helpful as hiring a team of actuaries to model out the risk based on situation-specific data, using their experience of other such problems and their special tools. This is an example of a services problem.

(As an aside, we built EdgeworthBox to be more flexible and to take into account explicitly the need for teams from differing backgrounds to interact around data in a contained space. We come from a financial markets background. We wanted to build a Bloomberg terminal for the real economy. We either work as a standalone solution or as a complement to the existing procurement technology. Ideally, we help people impose an idiosyncratic interaction on software designed for a standard use case.)

Procurement is a complex beast. The combination of simple building blocks and the people who work these problems introduces complexity. This produces emergent behavior as people and data collide. No two snowflakes are identical.

What is Agentic AI then? There is plenty of confusion about this in the procurement literature, so I will refer to the people who are on the cutting edge of investing in Generative AI. Here’s an outstanding note from Sequoia, the venture capital firm based in Silicon Valley. I’ll try to explain the technology in plain English.

If you take away one conclusion from this note, it should be this: Agentic AI isn’t software-as-a-service. It is service-as-software. It is a fundamental disruption of the markets for consulting and outsourcing. Some misunderstand it. They decry it as a self-learning robotic version of the standard one-size-fits-all software approach, peddled by hustlers and snake oil merchants. Their skepticism stems in part from the novelty of Agentic AI and Generative AI and in part from the dismal history of digital transformation. The critics of Agentic AI see it as the same old priest preaching a new religion. This view is naive and it is wrong.


When most people think of Generative AI, they think of chatbots like ChatGPT or Claude. You ask the bot questions and it gives you answers. You might ask it to draft a cover letter for a job application or to edit some marketing copy. Perhaps you ask it to generate a PowerPoint slide deck.

All of this relies on the first wave of Generative AI: the foundation models such as OpenAI’s GPT-4o. These are pre-trained models. The AI companies pour as much data as they can find into as many GPUs as they can assemble to teach these models how to identify patterns. They are essentially built to predict the next word in a sequence, over and over and over again. They are not infallible. They “hallucinate,” or make things up. They organize data into galaxies of multi-dimensional vectors. Think of this as space but with, say, 548 dimensions. It’s impossible to envision, I know. When we ask for things, the models compose paths between these virtual, conceptual asteroids and planets and stars as they combine words and phrases to make sentences, paragraphs, and documents. They are better with some types of data, like software code, than others. Some of these navigational tracks turn out to be roads to nowhere. But that’s not to dismiss these magnificent machines. The model vendors are focused on identifying and weeding out these rogue routes. They will succeed over time.
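A toy sketch can make that vector-space idea tangible. The four-dimensional "embeddings" below are numbers I invented for illustration; real models learn hundreds or thousands of dimensions from data:

```python
import math

def cosine_similarity(u, v):
    """Closeness of two vectors by angle, ignoring their lengths."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Invented 4-dimensional "embeddings"; real models use hundreds or thousands
# of dimensions learned from data, not hand-picked numbers like these.
embeddings = {
    "contract":  [0.9, 0.1, 0.3, 0.0],
    "agreement": [0.8, 0.2, 0.4, 0.1],
    "banana":    [0.0, 0.9, 0.1, 0.8],
}

def nearest(word):
    """Return the other concept closest to `word` in this toy space."""
    others = [w for w in embeddings if w != word]
    return max(others, key=lambda w: cosine_similarity(embeddings[word], embeddings[w]))
```

In this toy space, `nearest("contract")` lands on `"agreement"` rather than `"banana"`. Relative conceptual distance is all an embedding really encodes; the model's job is to navigate between the nearby points.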

Sequoia makes the analogy to System 1 thinking in human brains. Foundation model behavior is nothing more than knee-jerk pattern recognition, not the in-depth reasoning of System 2.

Agentic AI is System 2. It breaks projects down into tasks. Think of the agents as people working on your team. Each one has a role to play. In a procurement, one team member may be tasked with developing a list of suppliers relevant to the problem. Another team member may be in charge of finding all the documents from previous related projects. A third team member may be responsible for collecting documents related to similar purchases, say from government databases disclosing contracts and bid solicitations from the public sector. We give them tools. We give them access to databases and the Internet.
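The division of labor described above can be sketched as a tiny orchestrator dispatching named roles. Everything here, from the role names to the stubbed lookups, is hypothetical: a shape, not an implementation.

```python
# A hypothetical agent team for one procurement event. Real agents would call
# supplier directories, document stores, and public contract databases; these
# stubs just return canned values to show the structure.

def find_suppliers(context):
    return ["Supplier A", "Supplier B"]       # stub: would query supplier directories

def gather_prior_projects(context):
    return ["RFP-2023-017.pdf"]               # stub: would search internal archives

def collect_public_contracts(context):
    return ["gov-award-4411"]                 # stub: would query public databases

TEAM = {
    "supplier_scout":    find_suppliers,
    "archivist":         gather_prior_projects,
    "public_researcher": collect_public_contracts,
}

def run_procurement_team(context):
    """Each agent works its own task; the orchestrator assembles the results."""
    return {role: agent(context) for role, agent in TEAM.items()}
```

The point of the sketch is the decomposition: each role owns one task, and the orchestrator only assembles results, just as a team lead would.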

The key to a successful Agentic AI implementation will be the way we coach these new artificial teammates to do their jobs. We will give them instructions in the form of natural language prompts that include prior examples of good work and bad work. We will spell out the chain-of-thought that each agent needs to follow. The agents will have pre-trained DNA from multiple models. They may have chromosomes from OpenAI and also from Anthropic. But in addition to this nature, we will nurture them. We will educate them as we would a child through school. They are inference compute: in-the-moment calculations that leverage their education on top of the brute-force pre-trained compute that birthed them.
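One way to picture that coaching is a prompt builder that folds good and bad examples into the instructions an agent receives. This is a minimal sketch under my own assumptions; the function and its format are illustrative, not a real framework:

```python
def build_coaching_prompt(task, good_examples, bad_examples):
    """Assemble a natural-language prompt that teaches an agent by example."""
    lines = [f"Task: {task}", "", "Examples of good work:"]
    lines += [f"- {ex}" for ex in good_examples]
    lines += ["", "Examples of work to avoid:"]
    lines += [f"- {ex}" for ex in bad_examples]
    lines += ["", "Think step by step and show your reasoning before answering."]
    return "\n".join(lines)
```

Swapping in better examples changes the agent's behavior without touching the underlying model, which is exactly what makes the coaching customer-specific.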

When they graduate from our school, when we determine that they have learned how to apply our various courses in working different kinds of problems, we will unleash them on the real world.

I can hear the Luddites now. This is nothing more than a jumped-up version of the Generative AI foundation models. They will not be any different. This is just marketing.

Wrong.

When I say “we” will nurture them, I mean human beings. This is called Reinforcement Learning with Humans in the Loop. A human expert (not a machine) will now build Agentic AI teams of agents designed for the idiosyncratic circumstances of the individual client. It is a human being who will coach them, iterating through different versions of a task-specific way of thinking about a particular problem: procurement within the context of an individual enterprise buying in a specific category. No two contexts are the same.

The Reinforcement Learning combined with human re-engineering of the application that sits on top of the foundation model will make for customer-specific AI. This isn’t the machine teaching itself based upon some incorrect reward model. This is human-in-the-loop optimization of the AI architecture for applications built for individual customers. There will be thousands of these apps. They may appear similar, but they will reflect the unique context of each customer.

One consultant will do the job of five. Perhaps company A’s Agentic AI setup can be recycled for subsequent RFx events. Maybe it will need to be tweaked for a different category. The human will take the initial feedback from the agents and grade their work, like a teacher in high school. The agents will learn. They will develop their own voice.
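In outline, that grading loop might look like the sketch below: a human grades each draft and the critique feeds the next attempt. The function names and the stopping threshold are invented for illustration.

```python
def review_cycle(agent, grade, task, max_rounds=3):
    """The agent drafts, a human grades, and the critique feeds the next draft."""
    draft, feedback = None, None
    for _ in range(max_rounds):
        draft = agent(task, feedback)        # agent produces a draft
        score, feedback = grade(draft)       # a human, not a machine, grades it
        if score >= 0.9:                     # good enough: stop iterating
            break
    return draft
```

The loop is the whole idea: the expensive human attention goes into grading and feedback, while the agent does the drafting.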


In practice, the agents will do the heavy lifting at the beginning, but it will be a human who oversees them, who coaches them, and who completes the execution.

The agents will use tools, in many cases SaaS solutions designed for specific tasks, e.g., making a payment to a vendor. SaaS vendors will have their own agents, riding astride their software.

Every company will have its own agents who have grown up inside the context of the individual firm. No two groups of agents will be the same.

The business model will be different, too.

Back in the day, software vendors charged for enterprise licenses and maintenance. Then cloud computing came along and vendors switched to charging for seat licenses. Now, Agentic AI vendors will charge for outcomes. This is like paying a consultant on contingency. Imagine a consultant who comes to you and says, “I have knowledge of the sales tax in the Province of Ontario. Let me go through your historical filings. If I find anything, I’ll take half of what I find. It’s a windfall for you, after all. If I find nothing, I will not charge you a penny.” In this case, companies will pay for each procurement event that Agentic AI prosecutes to completion under human supervision.

Read the Sequoia paper. Discount it as Silicon Valley claptrap at your own peril. Ignore it and you risk ending up as a quaint telegraph salesman lamenting the use of fax machines. It does a wonderful job of explaining how Agentic AI will teach agents to have task-specific “cognitive architectures” that build upon the core pattern recognition of their foundation model ancestors.

This is why EdgeworthBox is starting with Generative-Procurement-as-a-Service. We want to map out as many different procurement approaches as possible with a view to this idiosyncratic Agentic AI customization in the future. We envision armies of enterprise-specific Agents executing in EdgeworthBox to exploit our repositories of structured data, communicating with their human-in-the-loop overlords using our messaging tools.

We’re open to talking to anyone about this. Give us a shout. Check out EdgeworthBox for more information.

Chand Sooran
