Interview with Roman Yampolskiy
Dr. Roman V. Yampolskiy is a Tenured Associate Professor in the Department of Computer Engineering and Computer Science at the School of Engineering, University of Louisville. He is the founding and current director of the Cyber Security Lab and the author of many books, including Artificial Superintelligence: A Futuristic Approach. Yampolskiy is a Senior Member of IEEE and AGI, a member of the Kentucky Academy of Science, a Research Advisor for MIRI, and an Associate of GCRI. He holds a PhD from the Department of Computer Science and Engineering at the University at Buffalo and a combined BS/MS (High Honors) in Computer Science from the Rochester Institute of Technology, NY, USA.
Dr. Yampolskiy’s main areas of interest are AI Safety, Artificial Intelligence, Behavioral Biometrics, Cybersecurity, Digital Forensics, Games, Genetic Algorithms, and Pattern Recognition. Dr. Yampolskiy is the author of over 100 publications, including multiple journal articles and books. His research has been cited by over 1,000 scientists and profiled in popular magazines, both American and foreign, including New Scientist and Science World Magazine.
BT (Aria Wong): What’s the most important issue within the artificial intelligence community that you think we will face within the next ten years?
Dr. Roman Yampolskiy: Technological unemployment is what immediately comes to mind; it will probably be the first significant impact as many jobs become automated. This will likely start with jobs like cab driving and slowly grow to cover many professions.
BT: What do you think needs to happen to minimize the damage caused by these issues?
Yampolskiy: It’s good to have a sort of social safety net. The labor of automated robots can be taxed and the proceeds distributed to people who have lost their jobs. Initially this could fund the retraining of workers, and later the maintenance and support of displaced workers. Unconditional basic income is a potential solution further down the line.
BT: There has been some criticism regarding how universal basic income may reduce the incentive to work. What are your thoughts on that?
Yampolskiy: It is a problem. In a way it’s like the welfare system today. My personal intuition is that people can be divided into two types: those who hate their job and do it to put food on the table, and those who love their job and would do it for free. Automation impacts those two groups in very different ways. Some people will enjoy getting free checks and do whatever they please instead of working; others will probably continue to work, even for free.
BT: Do you think there may be not enough work being done under a universal basic income scheme, or do you think that’s less of an issue?
Yampolskiy: Well, if you’re getting free money, you’re not the kind of person who does research for fun, and you previously had a very boring, manual-labor type of job, chances are you’re not going to do something very productive with your life. You’ll probably fall into a hedonistic loop of pleasures of the flesh.
BT: Considering that your primary field of research is in cyber-security, how does AI play into this field?
Yampolskiy: Right now we’re starting to see AI being used a lot for improving security, for catching abnormal behavior, and some work is even starting to show up on the attack side. Long term, the problem I’m interested in is security from AI. Right now we use AIs as tools for both defense and attack, but in the future the adversary itself will likely be an AI.
BT: Can you go into a little detail on how that might potentially look? What would you do to minimize the risks?
Yampolskiy: The simplest scenario is one where you have a fairly sophisticated intelligent system that gives malevolent orders, and you now have to protect against that system penetrating networks and committing crimes. We’re trying to see if we can use techniques developed in cybersecurity and forensics to prevent this type of attack. As long as the system is at human level or below, the same methods we use against human adversaries should scale to AI. If a system becomes more capable than that, nobody knows what to do.
BT: What do you think is likely to happen?
Yampolskiy: It doesn’t look good. If you’re not the smartest thing around, you’re not competitive. So eventually you’re not the one deciding what will happen, the super-intelligent AI will decide what will happen.
BT: Do you think we can do anything now to prepare for this situation, or do we have to wait and see?
Yampolskiy: Well it’s a good idea to start preparing as soon as possible. A lot of people have realized that and started a lot of interesting projects, trying to come up with some solution. I think it’s a bit too early to see how successful they will be. I’m not too optimistic at this point but I’m happy there are a lot of smart people looking at the problem.
BT: Do you think solutions in this realm are more of a technical thing that a few smart people should be working on, or is it something that requires more input from different institutions?
Yampolskiy: I mean it’s good to have lots of people looking at the problem, the more the merrier, certainly.
BT: In terms of the longer term risks, say fifty years, what do you think is the main thing?
Yampolskiy: For me, malevolent design is the biggest concern: people intentionally posting harmful content, stealing sources, blackmailing, and designing social engineering attacks.
BT: Moving away from the risks and more to the opportunities, what do you see as the most exciting potential in artificial intelligence?
Yampolskiy: Science is really the domain I’m most interested in. Right now, humans are capable of reading only a few papers a year, but we’ll get some of the most interesting and fruitful discoveries from AIs mining existing work, finding patterns we had never seen before.
BT: What do you think the timeline is for AI to be able to generate insights like that?
Yampolskiy: We’re starting to see some progress in that area. Some data mining of research papers regarding potential new drugs and other applications of existing technologies is in the works, but it’s not quite at human level yet. It will get much better soon with additional techniques, such as expanded memory and others like it.
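The kind of literature mining described here can be illustrated, in very simplified form, by measuring vocabulary overlap between paper abstracts to surface unexpected cross-paper connections. This is a minimal sketch with hypothetical abstract snippets; real systems use far richer representations (embeddings, entity extraction, citation graphs):

```python
import math
from collections import Counter

def tf_vector(text):
    # Lowercase bag-of-words term frequencies, skipping very short words.
    words = [w.strip(".,;:()").lower() for w in text.split()]
    return Counter(w for w in words if len(w) > 3)

def cosine(a, b):
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical abstract snippets standing in for a real corpus.
papers = {
    "aspirin_cancer": "aspirin inhibits tumor growth pathways in colorectal cancer",
    "aspirin_heart": "aspirin reduces platelet aggregation in cardiovascular disease",
    "metal_fatigue": "cyclic loading causes fatigue cracks in aluminum alloys",
}

vectors = {name: tf_vector(text) for name, text in papers.items()}
names = list(papers)
pairs = [(a, b, cosine(vectors[a], vectors[b]))
         for i, a in enumerate(names) for b in names[i + 1:]]
best = max(pairs, key=lambda p: p[2])
print(best[:2])  # the pair of abstracts sharing the most vocabulary
```

At scale, the same idea applied across disconnected literatures is what lets a machine notice that two fields are unknowingly discussing the same mechanism.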
BT: So it seems like this is likely to happen fairly soon.
Yampolskiy: Yes, it’s starting to happen already, and in the next five or ten years there should be a tremendous explosion in that type of work.
BT: I would like to address jobs in the private sector, particularly those that have not yet been automated and involve more mental labor. Do you think work of this sort can be automated, and if so, how long will it be before we have mass automation of not just physical but mental labor?
Yampolskiy: Well, we often see jobs like tax preparation, which has been considered intellectual labor, become automated to a large extent. Similarly, many jobs such as investing and financial advising would be quite trivial for an AI to accomplish.
BT: What about more complicated decisions like management and government decisions? Do you think such jobs could be automated?
Yampolskiy: There have been many breakthroughs in developing AIs that can handle even human-level decisions, such as business dealings, military decisions, and government policy. In the future, all of these could be handled quite successfully.
BT: Do you think there’s any kind of domain still that will be predominantly human-done?
Yampolskiy: Programming is the last job to be automated. If you can automate programming, then anything else goes. I actually have a paper arguing that programming is the hardest task for an AI to complete. If you want job security, be a top-notch programmer.
BT: You’re saying if programming is automated, then basically everything has been automated.
Yampolskiy: Right. I can just tell a computer, “automate an accountant, automate this,” and the computer does the programming and it’s a done deal.
BT: What are your thoughts on the values debate, in terms of how to program our human values into AI?
Yampolskiy: It’s very, very hard on so many levels. It’s hard because we don’t agree on values, it’s hard because we cannot define values, it’s hard because values change all the time and we want them to not be static. Thus, it will be very difficult to actually implement.
BT: Would you say there are major technical hurdles that once overcome, would help with these issues or is it a pretty broad set of complicated tasks?
Yampolskiy: Anytime we zoom in on a problem, we see just as many new problems show up. It always just gets worse.
BT: So there’s not really one central issue?
Yampolskiy: No, it’s not like “You can do this, and everything else will become easy and safe.” Instead, it’s more of “Oh, we have a new safety mechanism, how do we make this mechanism safe?” There are additional problems with that, like interactions between components, which becomes very complicated.
BT: What would you say is the best thing for current undergraduates interested in these issues to do? Do you have any recommendations for the field of work to enter, say AI safety, or is it too early for such considerations? Do you believe it’s better to go to work at a company at the forefront of AI or in a government lab somewhere?
Yampolskiy: All of those are good options. It depends on what, specifically, you want; as long as you’re in this domain, whether you’re developing safety mechanisms or conducting further research, you can certainly help.
BT: More generally, for students who are interested in AI but not necessarily technology oriented, what advice would you give them?
Yampolskiy: Consider something related to philosophy or ethics. We will definitely need help figuring out what we want in relation to AI.
BT: In terms of the business realm, is there any way those people can help?
Yampolskiy: The best way would be to provide resources for research, as funding is a substantial issue.
BT: Are there any additional thoughts you wanted to share?
Yampolskiy: I mean, just take the time to evaluate your major. It’s really sad how many people are in majors that will not exist in a few years or are even dead right now. It’s definitely worth your time to explore more before committing to a specific area.
BT: How does one factor the future into decision-making in the present?
Yampolskiy: There are quite a few studies on which jobs will be automated and how hard they are to automate. So if you pick something that in five years is predicted to be fully done by machines, it may not be the smartest thing to start your career in.