2016 was a pivotal year for Salesforce. That was when the company acquired MetaMind, "an enterprise AI platform that worked in medical imaging and eCommerce images and NLP and a bunch of other things, a horizontal platform play as a machine learning tool for developers," as founder Richard Socher described it.
If that sounds interesting today, it was probably ahead of its time then. The acquisition propelled Socher to Chief Data Scientist at Salesforce, where he led more than 100 researchers and many hundreds of engineers working on applications deployed at Salesforce scale. AI became an integral part of Salesforce's efforts, mainly via Salesforce Einstein, a wide-ranging initiative to inject AI capabilities into Salesforce's platform.
Besides market-oriented efforts, Salesforce also sponsors "AI for good" initiatives. This includes what Salesforce frames as a moonshot: building an AI social planner that learns optimal economic policies for the real world. The project, which goes by the name "AI Economist", has recently published some new results. Stephan Zheng, Salesforce Lead Research Scientist and Senior Manager of the AI Economist team, shared more on the project's background, results and roadmap.
Zheng was working towards his PhD in physics around 2013, the time deep learning exploded. The motivation he cited for his work at Salesforce is twofold: "to push the boundaries of machine learning to discover the principles of general intelligence, but also to do social good".
Zheng believes that social-economic issues are among the most critical of our time. What attracted him to this particular line of research is the fact that economic inequality has been accelerating in recent decades, negatively impacting economic opportunity, health, and social welfare.
Taxes are an important government tool to improve equality, Zheng notes. However, he believes that it's challenging for governments to design tax structures that help create equality while also driving economic productivity. Part of the problem, he adds, has to do with economic modeling itself.
"In traditional economics, if people want to optimize their policy, they need to make a lot of assumptions. For instance, they might say that the world is more or less the same every year. Nothing really changes that much.
That's really constraining. It means that a lot of these methods don't really find the best policy if you consider the world in its full richness, if you look at all the ways in which the world can change around you", Zheng said.
The Salesforce AI Economist team tries to tackle this by applying a particular type of machine learning called reinforcement learning (RL). RL has been used to build systems such as AlphaGo and is different from the supervised learning approach that is prevalent in machine learning.
"In supervised learning, somebody gives you a static data set, and then you try to learn patterns in the data. In reinforcement learning, instead, you have this simulation, this interactive environment, and the algorithm learns to look at the world and interact with the simulation. And then from that, it can actually play around with the environment, it can change the way the environment works", Zheng explained.
This flexibility was the main reason why RL was chosen for the AI Economist. As Zheng elaborated, there are three parts to this approach. There's the simulation itself, the optimization of the policy, and then there is data, too, because data can be used to inform how the simulation works. The AI Economist focused on modeling and simulating a simplified subset of the economy: income tax.
A two-dimensional world was created, modeling spatial and temporal relations. In this world, agents can work, mining resources, building houses, and making money that way. The income that the agents earn through building houses is then taxed by the government. The task of the AI Economist is to design a tax system that can optimize for equality (how similar people's incomes are) and productivity (sum of all incomes).
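Those two objectives translate naturally into code. The sketch below assumes a Gini-based equality measure, one common way to quantify "how similar people's incomes are"; it is meant as an illustration rather than the project's exact formulation.

```python
import numpy as np

# Hedged sketch of the two quantities the tax planner trades off:
# productivity as the sum of all incomes, and equality as a measure of how
# similar incomes are. The Gini-based equality measure is illustrative, not
# necessarily the project's exact formulation.

def productivity(incomes: np.ndarray) -> float:
    """Total income across all agents."""
    return float(incomes.sum())

def equality(incomes: np.ndarray) -> float:
    """1.0 when all incomes are identical, 0.0 when one agent earns everything."""
    n = len(incomes)
    # Gini coefficient = mean absolute pairwise difference / (2 * mean income).
    gini = np.abs(incomes[:, None] - incomes[None, :]).mean() / (2 * incomes.mean())
    return float(1.0 - gini * n / (n - 1))

incomes = np.array([10.0, 20.0, 30.0, 40.0])
print(productivity(incomes), equality(incomes))  # 100.0 and roughly 0.67
```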
Salesforce's research shows that AI can improve the trade-off between income equality and productivity when compared to three alternate scenarios: a prominent tax formula developed by Emmanuel Saez, progressive taxes resembling the US tax formula, and the free market (no taxes). As Zheng explained, those three alternatives were coded into the system, and their outcomes were measured against those derived from the AI via the RL simulation.
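For a sense of what "coded into the system" can look like, here is a hedged sketch of fixed baseline tax schedules: a no-tax free market and a US-style progressive schedule with marginal brackets. The bracket boundaries and rates are invented for illustration, and the Saez formula is not reproduced here.

```python
# Hedged sketch of fixed baseline tax schedules for comparison against a
# learned policy. The bracket boundaries and rates are made up for
# illustration; the actual study used the real US schedule and the Saez
# formula, neither of which is reproduced here.

def marginal_bracket_tax(income, brackets, rates):
    """Progressive tax: each slice of income between consecutive bracket
    boundaries is taxed at that bracket's marginal rate."""
    tax, lower = 0.0, 0.0
    for upper, rate in zip(brackets + [float("inf")], rates):
        if income <= lower:
            break
        tax += (min(income, upper) - lower) * rate
        lower = upper
    return tax

def free_market_tax(income):
    """The no-tax baseline: the government collects nothing."""
    return 0.0

def us_like_tax(income):
    """A US-style progressive schedule with illustrative brackets and rates."""
    return marginal_bracket_tax(income, brackets=[10, 40, 90],
                                rates=[0.10, 0.22, 0.32, 0.37])

print(us_like_tax(100.0), free_market_tax(100.0))  # 27.3 and 0.0
```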
Although this sounds promising, we should also note the limitations of this research. First off, the research only addresses income tax in a vastly simplified economy: there are no assets, no international trade and the like, and there is only one type of economic activity. In addition, the simulation includes at most 10 agents at this point.
The AI Economist is an economic simulation in which AI agents collect and trade resources, build houses, earn income, and pay taxes to a government.
Zheng noted that the research considered many different spatial layouts and distributions of resources, as well as agents with different skill sets or skill levels. He also mentioned that the current work is a proof of concept, focusing on the AI part of the problem.
"The key conceptual issue that we're addressing is the government trying to optimize this policy, but we can also use AI to model how the economy is going to respond in turn. This is something we call a two-level RL problem.
From that point of view, having ten agents in the economy and the government is already quite challenging to solve. We really have to put a lot of work in to find the algorithm, to find the right mix of learning strategies to actually make the system find these really good tax policy solutions", Zheng said.
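The structure of that two-level problem can be sketched with a deliberately simplified toy: an inner loop in which agents best-respond to the current tax policy, and an outer loop in which the planner adjusts the policy to improve a social welfare measure. The flat tax rate, analytic best responses, and hill-climbing planner below are illustrative assumptions; the actual research trains neural-network policies at both levels.

```python
import numpy as np

# Toy sketch of a "two-level" optimization: an inner loop in which worker
# agents adapt to the current tax policy, and an outer loop in which the
# planner adjusts the policy based on the social outcome. A single flat tax
# rate, analytic best responses, and random hill climbing are deliberate
# simplifications for illustration.

rng = np.random.default_rng(0)
skills = np.array([1.0, 2.0, 3.0])  # heterogeneous agent skill levels

def inner_loop(tax_rate):
    """Agents choose effort to maximize after-tax income minus effort cost.
    With income = skill * effort and cost = effort**2 / 2, the best response
    is effort = skill * (1 - tax_rate): higher taxes discourage work."""
    effort = skills * (1.0 - tax_rate)
    return skills * effort  # pre-tax incomes

def social_welfare(incomes, tax_rate):
    """Equality times productivity, with collected taxes redistributed equally."""
    taxes = incomes * tax_rate
    post_tax = incomes - taxes + taxes.sum() / len(incomes)
    prod = post_tax.sum()
    gini = np.abs(post_tax[:, None] - post_tax[None, :]).mean() / (2 * post_tax.mean())
    eq = 1.0 - gini * len(post_tax) / (len(post_tax) - 1)
    return eq * prod

# Outer loop: the planner nudges its single parameter toward higher welfare.
tax_rate = 0.5
for _ in range(200):
    candidate = float(np.clip(tax_rate + rng.normal(scale=0.05), 0.0, 0.95))
    if social_welfare(inner_loop(candidate), candidate) > \
       social_welfare(inner_loop(tax_rate), tax_rate):
        tax_rate = candidate

print(round(tax_rate, 2))  # settles near this toy model's optimum (roughly 0.1-0.2)
```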
Looking at how people use RL to train systems to play some types of video games or chess, these are already really hard search and optimization problems, even though they involve just two or ten agents, Zheng added. He claimed that the AI Economist is more efficient than those systems.
The AI Economist team is confident that, now that they have a good grasp of the learning part, they are in a great position to think about the future and extend this work along other dimensions as well, according to Zheng.
In an earlier version of the AI Economist, the team experimented with having human players participate in the simulation, too. This resulted in more noise, as people behaved in inconsistent ways; according to Zheng, however, the AI Economist still achieved higher equality and productivity levels.
Some obvious questions about this research are what economists think of it and whether their insights were modeled in the system as well. No member of the AI Economist team is actually an economist. However, some economists were consulted, according to Zheng.
"When we first started out, we didn't have an economist on board, so we partnered with David Parkes, who sits both in computer science and economics. Over the course of the work, we did talk to economists and got their opinions their feedback. We also had an exchange with [economist and best-selling author] Thomas Piketty. He's a very busy man, so I think he found the work interesting.
He also raised questions about, to some degree, how the policies could be implemented. And you can think of this from many dimensions, but overall he was interested in the work. I think that reflects the broader response from the economic community. There's both interest and questions on whether this is implementable. What do we need to do this? It's food for thought for the economics community", Zheng said.
As for the way forward, Zheng believes it's "to make this broadly useful and have some positive social impact". Zheng added that one of the directions the team is heading in is getting closer to the real world.
On the one hand, that means building bigger and better simulations, so they're more accurate and more realistic. Zheng believes that will be a key component of frameworks for economic modeling and policy design. A big part of that for AI researchers is to prove that you can trust these methods.
"You want to show things like robustness and explainability. We want to tell everyone here are the reasons why the AI recommended this or that policy. Also, I strongly believe in this as an interdisciplinary problem. I think really the opportunity here is for AI researchers to work together with economists, to work together with policy experts in understanding not just the technical dimensions of their problem, but also to understand how that technology can be useful for society", Zheng said.
Two aspects that Zheng emphasized about this research were goal-setting and transparency. Goal-setting, i.e. what outcomes to optimize for, is done externally. This means that whether the system should optimize for maximum equality, maximum productivity, an equilibrium between the two, or, potentially in the future, other parameters such as sustainability, is a design choice left to the user.
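In code terms, that design choice might look like a user-supplied weight on the objective, as in the hypothetical sketch below; the parameter names and weighting scheme are assumptions for illustration, not the project's actual interface.

```python
# Sketch of goal-setting as an external design choice: the planner's objective
# is a user-chosen combination of outcome metrics rather than something the AI
# decides for itself. The weighting scheme and parameter names are illustrative
# assumptions, not the project's actual interface.

def social_objective(equality, productivity, equality_weight=0.5):
    """equality_weight = 1.0 optimizes equality only, 0.0 productivity only;
    values in between trade the two off as the user sees fit. In practice the
    two metrics would first be normalized to comparable scales."""
    return equality_weight * equality + (1.0 - equality_weight) * productivity

# The same machinery, different goals: the choice sits with the user.
print(social_objective(equality=0.8, productivity=120.0, equality_weight=0.25))
```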
Zheng described "full transparency" as the cornerstone of the project. If, in the future, iterations of these types of systems are going to be used for social good, then everyone should be able to inspect, question and critique them, according to Zheng. To serve this goal, the AI Economist team has open-sourced all the code and experimental data from the research.
Another part of the way forward for the AI Economist team is more outreach to the economist community. "I think there's a fair bit of education here, where today economists are not trained as computer scientists. They typically are not taught programming in Python, for instance. And things like RL might also not be something that is part of their standard curriculum or their way of thinking. I think that there's a really big opportunity here for interdisciplinary research," Zheng said.
The AI Economist team is constantly conversing with economists and presenting this work to the scientific community. Zheng said the team is working on a number of projects, which they will be able to share more about in the near future. He concluded that a bit of education to familiarize people with this approach, along with more user-friendly UI/UX, may go a long way.