Earlier this week, I attended the O’Reilly AI Conference up in San Jose, CA. Wednesday and Thursday started off with keynotes showcasing what companies were currently researching in the field of AI. While I’m no expert in the field, I found four key takeaways from the keynotes.
Beyond Fully Supervised/Unsupervised Learning
For those new to machine learning, the two most common categories of algorithms are supervised and unsupervised learning. In supervised learning, the data contains labels that can be used in training. For example, given the gender and the height of a student, we can use linear regression to predict the weight of that student. In unsupervised learning, the data doesn't contain labels; it's used more for tasks like clustering and dimensionality reduction.
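To make the supervised example concrete, here's a minimal sketch using scikit-learn; the student measurements are invented purely for illustration:

```python
# A minimal supervised learning sketch: predict weight from gender and height.
# The data below is made up purely for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

# Features: [gender (0 = female, 1 = male), height in cm]; labels: weight in kg
X = np.array([[0, 160], [1, 175], [0, 165], [1, 180], [1, 170], [0, 158]])
y = np.array([55.0, 72.0, 60.0, 78.0, 68.0, 52.0])

model = LinearRegression()
model.fit(X, y)  # supervised: the model learns from labeled examples

# Predict the weight of a new student (male, 172 cm tall)
print(model.predict(np.array([[1, 172]])))
```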
While both types of algorithms are useful, information in the real world isn't always so straightforward to represent. The majority of data isn't labeled, and labeling data is a time- and cost-intensive job. Additionally, there is so much data available that it's unrealistic to label it all.
So, where do we go from here?
Within the supervised/unsupervised spectrum, there are a couple of in-between areas, such as weakly and semi-supervised learning. Weakly supervised learning uses a small amount of noisy or imprecise labels to programmatically label larger amounts of data. Semi-supervised learning uses a combination of labeled and unlabeled data to train models. An interesting variant is self-supervised learning, in which the data itself acts as the supervision instead of a human: labels are derived from how each piece of data relates to other data.
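As a concrete example of the semi-supervised case, here's a minimal sketch using scikit-learn's SelfTrainingClassifier, with unlabeled points marked as -1; the toy data is invented purely for illustration:

```python
# Semi-supervised learning sketch: a classifier trains on the labeled points,
# pseudo-labels the unlabeled points it is confident about, and retrains.
import numpy as np
from sklearn.svm import SVC
from sklearn.semi_supervised import SelfTrainingClassifier

X = np.array([[0.0], [0.2], [0.4], [2.0], [2.2], [2.4], [0.1], [2.1]])
# Only the first six points are labeled; -1 marks the unlabeled ones
y = np.array([0, 0, 0, 1, 1, 1, -1, -1])

model = SelfTrainingClassifier(SVC(probability=True))
model.fit(X, y)
print(model.predict([[0.3], [2.3]]))  # expected: [0 1]
```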
Outside of supervised/unsupervised learning, you have reinforcement and deep reinforcement learning. Reinforcement learning allows agents to learn from their environment without a human getting involved. Traditional reinforcement learning uses methods such as multi-armed bandits and Markov decision processes. In deep reinforcement learning, neural networks act as the backend of the agent's learning model, approximating its policy or value function. Reinforcement learning is a complex topic that requires a lot of background knowledge in machine learning and mathematics.
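Still, the simplest flavor is approachable. Here's a minimal epsilon-greedy multi-armed bandit sketch; the arm payout rates are made up purely for illustration:

```python
# Epsilon-greedy multi-armed bandit: balance exploring random arms with
# exploiting the arm that has paid out best so far.
import random

reward_probs = [0.3, 0.5, 0.8]      # true (hidden) payout rate of each arm
counts = [0] * len(reward_probs)    # times each arm has been pulled
values = [0.0] * len(reward_probs)  # running average reward per arm
epsilon = 0.1                       # fraction of pulls spent exploring

for _ in range(10_000):
    if random.random() < epsilon:
        arm = random.randrange(len(reward_probs))  # explore: pick a random arm
    else:
        arm = values.index(max(values))            # exploit: pick the best arm
    reward = 1 if random.random() < reward_probs[arm] else 0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean

print(values)  # the estimates should approach [0.3, 0.5, 0.8]
```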
Pacing of AI Computing
Throughout the conference, I kept hearing that the compute required by AI is doubling about every three months. That's much faster than Moore's Law, which several speakers claimed is dead. Why is this a big deal?
Today, we can already solve trivial AI problems such as classifying animals. However, in the real world, nothing is so easily representable. Real-world scenarios require hundreds of neural network layers and billions of parameters. Top that off with scaling problems, and training AI becomes a very time-consuming and expensive process.
This has led to models being trained on GPUs, companies producing tensor processing units (TPUs) to greatly increase training speed, and software frameworks that allow for distributed applications and training. In one case, Cerebras Systems went as far as building a huge processor with 1.2 trillion transistors just to cut training times down to a few days.
The takeaway is that despite impressive advances, infrastructure issues will still need to be solved before much more powerful AI can become feasible.
Ethics of Building AI
With so many people worried about how AI will behave in the real world, there have been calls for AI to be transparent. It also doesn't help that several AI projects have drawn controversy, such as Microsoft's Tay, Amazon's hiring software, and Facebook's chatbots inventing a new language without the developers knowing what happened.
Companies like Google, Microsoft, and Dataiku are trying to make sure that AI decisions are transparent, fair, and explainable to users. They also want to make sure AI won't harm people.
Google, for example, makes its AI transparent by following guidelines from two research papers:
Model Cards for Model Reporting – Making machine learning transparent
Datasheets for Datasets – Making data transparent
If you’re interested in learning more about these guidelines, click the links above.
Open-endedness
Why do we use machine learning? Simple: to solve problems.
Now, why do we think of machine learning as only a tool to solve problems?
Instead, what if we allowed AI to learn without intending to actually solve problems?
I admit the idea sounds far-fetched. You don't really hear much about AI just cruising along and doing something interesting out of the blue. Oftentimes, it's about AI beating a master player at Go or Dota 2. In other words, it solves a problem.
Uber AI Labs' Kenneth Stanley believes otherwise. He proposes that open-ended algorithms could be the key for AI to learn anything. His project, Paired Open-Ended Trailblazer (POET), has a bipedal robot agent walk through various terrains in order to learn how to overcome obstacles. No human is involved in generating the terrain or training the agent. Instead, the program creates new problems by generating random terrain, and the agent tries to solve them by learning how to navigate it.
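Based only on that description, here's a highly simplified skeleton of what a POET-style loop might look like; every function here (mutate_terrain, train_agent, agent_score) is a hypothetical placeholder, not the actual POET code:

```python
# Sketch of an open-ended loop: environments and agents evolve together,
# with no fixed objective beyond generating and solving new terrains.
import random

def mutate_terrain(terrain):
    # Hypothetical: perturb terrain parameters (roughness, gap widths, ...)
    return [h + random.uniform(-0.1, 0.1) for h in terrain]

def train_agent(agent, terrain):
    # Hypothetical: optimize the agent's walking policy on this terrain
    return agent

def agent_score(agent, terrain):
    # Hypothetical: how far the agent gets before falling (0.0 to 1.0)
    return random.random()

# Each pair couples an environment (terrain) with the agent learning in it
pairs = [([0.0] * 10, "initial_agent")]

for generation in range(100):
    next_pairs = []
    for terrain, agent in pairs:
        agent = train_agent(agent, terrain)        # keep optimizing the agent
        child = mutate_terrain(terrain)            # generate a new problem
        if 0.2 < agent_score(agent, child) < 0.8:  # keep challenges that are
            next_pairs.append((child, agent))      # neither trivial nor hopeless
        next_pairs.append((terrain, agent))
    pairs = next_pairs[:20]  # cap the population size
```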
There isn't a lot of research in this domain yet, but I'm interested in delving a bit more into the area. If you're intrigued by the idea, he co-wrote a book on this topic called Why Greatness Cannot Be Planned: The Myth of the Objective.
Conclusion
The AI industry is alive and kicking. Research has increased dramatically in many domains, such as natural language processing (NLP), computer vision, and AI algorithms. There has also been an increased focus on making AI ethical and transparent to consumers, developers, and researchers. I don't know where the focus of AI will be in 2020, but I found the keynotes (and the conference in general) to provide insight on what's to come.