Decision science and data science often get mistaken for one another. Both fields require many of the same skills, and there are similarities in how they approach analysis.
But here’s the basic difference: data scientists identify actionable insights from large sets of data. They use a variety of techniques, like advanced statistics and machine learning, to spot patterns and forecast trends. Their chief concern is extracting the insights themselves.
For decision scientists, data is one tool among several for facilitating decision-making. Skilled not just in data science but also in decision theory, behavioral psychology, and econometrics, decision scientists are primarily concerned with making insights usable so an organization can make decisions faster. Their chief concern is the use case of the insights.
These days, many FAANG companies are hiring decision scientists or training data science professionals in decision theory. In fact, in 2018, Google named its first Chief Decision Scientist.
There are two reasons why:
Here’s an example: Facebook employs decision scientists to guide marketing decisions and measure marketing success, which can be difficult to do accurately with traditional tools. A decision scientist at Facebook, therefore, might design a randomized controlled trial (RCT), like an advanced A/B test, to determine the value of each channel and ultimately help the marketing team best allocate its budget.
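To make that concrete, here is a minimal, simulated sketch of the kind of analysis such an experiment could feed: each channel gets a randomized holdout, the incremental (causal) lift is estimated against that holdout, and budget is tilted toward the channels with the most lift per dollar. The channel names, numbers, and allocation rule are illustrative, not Facebook’s actual methodology.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical experiment: for each channel, users are randomly split into a
# group that is exposed to the channel's ads (treatment) and a holdout that
# is not (control). All numbers below are simulated purely for illustration.
channels = {
    # name: (spend in dollars, treated conversions, control conversions)
    "search":  (50_000, rng.binomial(1, 0.050, 20_000), rng.binomial(1, 0.040, 20_000)),
    "social":  (30_000, rng.binomial(1, 0.030, 20_000), rng.binomial(1, 0.028, 20_000)),
    "display": (20_000, rng.binomial(1, 0.020, 20_000), rng.binomial(1, 0.019, 20_000)),
}

lift_per_dollar = {}
for name, (spend, treated, control) in channels.items():
    # Incremental conversion rate attributable to the channel (causal lift),
    # identified by random assignment rather than last-click attribution.
    lift = treated.mean() - control.mean()
    # Rough value metric: incremental conversions in the treated group per dollar spent.
    lift_per_dollar[name] = max(lift, 0.0) * len(treated) / spend

# Naive reallocation: tilt budget toward channels where an incremental dollar
# buys the most conversions (a real team would add confidence intervals and constraints).
total = sum(lift_per_dollar.values())
for name, value in lift_per_dollar.items():
    print(f"{name}: incremental conversions per dollar = {value:.4f}, "
          f"suggested budget share = {value / total:.0%}")
```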
We took the opportunity to chat with a decision science Ph.D. candidate about this interesting field (and how it relates to data science). Sandeep Gangarapu, a sixth-year Ph.D. candidate at the University of Minnesota and former Google Search analyst, is studying Information and Decision Science. He shared some insights about the field, his research into A/B testing and multi-armed bandits, and why interest in decision science is growing.
The Information and Decision Sciences department started in the 1970s, just as technology was starting to replace certain processes inside companies. Researchers in the field were thinking about questions relating to how people use technology to make decisions and its effect on human and organizational behavior.
A basic decision science question would be, “Is email a good or bad thing?” On one hand, it facilitates fast, asynchronous communication and information sharing, but it could also distract employees and hamper their work-life balance. The punchline is to causally measure the effect of X on Y while studying the underlying theory that drives the decision-making. But how can you causally attribute, with data, whether email is good or bad? That’s a hard question. Researchers in this field try to answer these kinds of hard economic and social science questions.
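One standard tool for questions like this is a randomized experiment. As a toy, simulated illustration (the policy labels and outcome numbers are made up), randomly assigning teams to an email-heavy or an email-light policy lets a simple difference in means be interpreted causally:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical experiment: teams are randomly assigned to an "email-heavy" or
# "email-light" communication policy, and we measure some outcome Y, say tasks
# completed per week. The outcomes below are simulated for illustration only.
email_heavy = rng.normal(loc=41.0, scale=6.0, size=200)
email_light = rng.normal(loc=43.0, scale=6.0, size=200)

# Because assignment was randomized, the difference in means estimates the
# causal effect of the email-heavy policy on the outcome.
effect = email_heavy.mean() - email_light.mean()
t_stat, p_value = stats.ttest_ind(email_heavy, email_light)

print(f"estimated effect of the email-heavy policy: {effect:+.2f} tasks/week (p = {p_value:.3f})")
```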
Nowadays, there’s a lot of research looking at how online platforms influence our decisions and the impact of information systems on society and policy. But, gradually, we’ve also expanded into other areas, like my work on multi-armed bandits and heterogeneous treatment effects in A/B testing.
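For readers who haven’t met the term, a heterogeneous treatment effect just means an experiment’s effect differs across subgroups. Here is a minimal, simulated illustration (segments and numbers are made up) of how an overall average can hide that:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 10_000

# Simulated A/B test in which the treatment helps mobile users more than
# desktop users, the kind of heterogeneous effect a single average would hide.
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),
    "segment": rng.choice(["mobile", "desktop"], n),
})
lift = np.where(df["segment"] == "mobile", 0.03, 0.00)   # made-up per-segment lifts
df["converted"] = rng.binomial(1, 0.05 + lift * df["treated"])

# Overall average treatment effect...
ate = df.groupby("treated")["converted"].mean().diff().iloc[-1]
# ...versus the effect estimated separately within each segment.
cate = (df.groupby(["segment", "treated"])["converted"].mean()
          .unstack("treated")
          .pipe(lambda t: t[1] - t[0]))

print(f"average effect: {ate:.3f}")
print(cate.rename("per-segment effect"))
```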
In short, decision science is akin to causal inference. This is an oversimplification, but a good one at that.
The main focus of my thesis has been how companies can better use A/B testing, and the motivation came from asking ourselves this question:
“We know that A/B testing works and can help companies innovate really fast. What’s preventing people from getting there? Why isn’t every company doing it?”
The first insight I found has to do with the size of the company. Smaller companies just don’t have the capacity or skilled people to do A/B testing. Another big hurdle is the lack of experimentation culture in the company. Most decisions are made by the highest-paid person in the room. Getting them to test a change before rolling it out takes a paradigm shift in the company. And finally, there’s the issue of cost: the infrastructure cost of setting up the experimentation pipeline, running experiments, and analysis is a bottleneck.
My research tries to address these problems and provide methodological ways to soften these hindrances. One path we’ve taken is to squeeze more utility (profits) out of experimentation and to build simple frameworks that make advanced techniques more approachable. We developed a framework that combines A/B testing with machine learning and optimization to solve the above problem. This framework is easy for companies to understand, easy to implement, and easy to share with stakeholders, and that can help quite a bit.
A multi-armed bandit, according to Optimizely, is:
A ‘smarter’ or more complex version of A/B testing that uses machine learning algorithms to dynamically allocate traffic to variations that are performing well, while allocating less traffic to variations that are underperforming.
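To see what that dynamic allocation looks like in practice, here is a minimal sketch of Thompson sampling, one common bandit algorithm, run on simulated traffic with made-up conversion rates:

```python
import numpy as np

rng = np.random.default_rng(3)

# Beta-Bernoulli Thompson sampling, one common multi-armed bandit algorithm.
# Traffic drifts toward whichever variation currently looks best.
true_rates = [0.05, 0.065, 0.04]              # hypothetical conversion rates
successes = np.ones(len(true_rates))          # Beta(1, 1) prior for each arm
failures = np.ones(len(true_rates))
pulls = np.zeros(len(true_rates), dtype=int)

for _ in range(10_000):                       # each iteration is one visitor
    samples = rng.beta(successes, failures)   # sample a plausible rate per arm
    arm = int(np.argmax(samples))             # route the visitor to the sampled best arm
    reward = rng.binomial(1, true_rates[arm])
    successes[arm] += reward
    failures[arm] += 1 - reward
    pulls[arm] += 1

print("traffic share per variation:", np.round(pulls / pulls.sum(), 2))
```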
I am focusing on multi-armed bandits. So again, the problem there is that if it’s already hard for companies to implement A/B testing, how can we even dream about a multi-armed bandit solution? The idea was to implement something in between, using the hook that multi-armed bandits maximize utility (profit), which companies deeply care about. We developed a hybrid algorithm that trades off utility and inference on how each variant performed.
Essentially, bandit algorithms have very good utility, but because the allocations are made adaptively, the inference is tough. You don’t always get the concrete statistical evidence that you do with A/B testing. Our algorithm trades off some utility in order to buy a lot of inference, so ultimately you enjoy the best of both worlds.
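Sandeep’s actual algorithm isn’t described here in enough detail to reproduce, but one generic way to picture the tradeoff is to reserve a fixed, uniformly randomized slice of traffic for clean A/B-style inference while a bandit allocates the rest to maximize utility. The sketch below, with simulated visitors and hypothetical rates, only illustrates that general idea:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

true_rates = [0.05, 0.06]        # hypothetical conversion rates for variants A and B
explore_share = 0.2              # slice of traffic reserved for uniform, A/B-style assignment
succ = np.ones(2)                # bandit's Beta(1, 1) posteriors
fail = np.ones(2)
ab_outcomes = [[], []]           # outcomes collected from the uniform slice only

for _ in range(50_000):
    if rng.random() < explore_share:
        # Uniform slice: randomized like a classic A/B test, so its data
        # supports clean inference about each variant.
        arm = int(rng.integers(2))
        reward = rng.binomial(1, true_rates[arm])
        ab_outcomes[arm].append(reward)
    else:
        # Bandit slice: Thompson sampling chases utility (more conversions).
        arm = int(np.argmax(rng.beta(succ, fail)))
        reward = rng.binomial(1, true_rates[arm])
    succ[arm] += reward
    fail[arm] += 1 - reward

diff = np.mean(ab_outcomes[1]) - np.mean(ab_outcomes[0])
t_stat, p_value = stats.ttest_ind(ab_outcomes[1], ab_outcomes[0])
print(f"difference estimated from the uniform slice: {diff:+.4f} (p = {p_value:.3f})")
```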
I would say I am in a slightly better position compared to others in decision sciences, owing to my research in machine learning and previous coding experience at Google. But in a decision sciences program, you are mostly trained on causal inference concepts. That might close doors to positions like applied scientist and research scientist, which may require solid coding skills.
One thing I’ve had to learn is how to be more efficient as a problem solver and shift out of a research mindset into an interview mindset. Previously, I would get right into solving an interview question without thinking much and provide an answer rather than the answer.
One way I’ve been working on that is practicing with lots of mock interviews. They’ve taught me how to take a step back, ask some questions, get clarification, and then go through step by step with the interviewer and arrive at an answer together.
When I was working at Google, I wanted to get a master’s in data science. I wasn’t even thinking about getting a Ph.D., but I ended up talking to an ex-colleague who was in the Information and Decision Sciences program at CMU. I think what hooked me was the depth of the learning I would have and the opportunities that would be open at the end.
A master’s can be completed in a year and gives you strong baseline knowledge, but there’s always a chance you hit a skill ceiling just a few years after you graduate. With the Ph.D., I thought it would probably be three more years [than a master’s], and I knew I’d be able to do deep research to really build my expertise. Again, it’s a tradeoff. A Ph.D. is a huge investment and a big life decision, and it’s a completely personal one.
I wouldn’t say that it would make a candidate more competitive, at least for now, although I am seeing a Ph.D. in a variety of disciplines listed as a recommended qualification for a lot of tech jobs lately.
In terms of decision science, I think companies are starting to realize that there are a lot of things that are hard to measure and that it can be hard to make data-driven decisions. People who are trained in decision science have the skills to approach those questions and determine the best path forward. Once many machine learning problems and frameworks are standardized, the differentiating factor will be the ability to solve hard, not-so-obvious problems, and I think decision scientists are perfectly positioned for that.
A/B testing and causal statistics are key tools in both decision science and data science. Develop your skills with our A/B testing course, as well as our data science interview questions.
For more about decision science, see this Harvard overview: What is Decision Science?