9 machine learning myths
Source: Mary Branscombe


Machine learning is proving so useful that it's tempting to assume it can solve every problem and applies to every situation. Like any other tool, machine learning is useful in particular areas, especially for problems you’ve always had but knew you could never hire enough people to tackle, or for problems with a clear goal but no obvious method for achieving it.

Still, every organization is likely to take advantage of machine learning in one way or another: 42 percent of executives recently told Accenture that they expect AI to be behind all of their new innovations by 2021. But you’ll get better results if you look beyond the hype and avoid these common myths by understanding what machine learning can and can’t deliver.

Myth: Machine learning is AI

Machine learning and artificial intelligence are frequently used as synonyms, but machine learning is only the technique that has most successfully made its way out of research labs and into the real world. AI is a broad field covering areas such as computer vision, robotics and natural language processing, as well as approaches such as constraint satisfaction that don’t involve machine learning at all; think of it as anything that makes machines seem smart. None of these is the kind of general “artificial intelligence” that some people fear could compete with, or even attack, humanity.

Beware the buzzwords and be precise. Machine learning is about learning patterns and predicting outcomes from large data sets; the results might look “intelligent” but at heart it’s about applying statistics at unprecedented speed and scale.
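To make that concrete, here is a minimal sketch of what “learning patterns and predicting outcomes” usually means in practice, assuming scikit-learn is available and using synthetic data in place of a real data set: fit a statistical model to labelled examples, then ask it to predict the outcome for new ones.

    # Fit a statistical model to labelled examples, then predict new outcomes.
    # Assumes scikit-learn; the data is synthetic and purely illustrative.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X, y)   # "learn" the pattern
    print(model.predict(X[:5]))   # predicted outcomes for five examples
    print(model.coef_)            # the "pattern" is just a set of fitted coefficients
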
Myth: All data is useful

You need data for machine learning, but not all data is useful for machine learning. To train your system you need representative data that covers the patterns and outcomes it will need to handle. You also need data that doesn’t include irrelevant patterns (such as photos in which all the men are standing up and all the women are sitting down, or all the cars are in a garage and all the bikes are in a muddy field), because the model you create will pick up those overly specific patterns and look for them in the data you use it with. And all the data you use for training needs to be well labelled, with the features that match the questions you’re going to ask the machine learning system, which takes a lot of work.
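As a rough illustration of what that auditing work looks like, the sketch below checks a labelled data set for coverage, spurious correlations and missing labels; pandas is assumed, and the file name and the “label” and “background” columns are hypothetical.

    # Quick audit of a labelled training set before any model is built.
    # The CSV file and the "label"/"background" columns are hypothetical.
    import pandas as pd

    df = pd.read_csv("training_data.csv")
    print(df["label"].value_counts(normalize=True))   # are all outcomes represented?
    # Does an irrelevant attribute track the label too closely (muddy fields and bikes)?
    print(pd.crosstab(df["label"], df["background"], normalize="index"))
    print(df["label"].isna().mean())                  # how much labelling work is left?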

Don’t assume the data you already have is clean, clear, representative or easy to label.

Myth: You always need a lot of data

The major advances made recently in image recognition, machine reading comprehension, language translation and other areas have happened because of better tools, computing hardware such as GPUs that can process large amounts of data in parallel, and large labelled data sets, including ImageNet and the Stanford Question Answering Dataset. But thanks to a trick called transfer learning, you don’t always need a large data set to get good results in a specific area; instead you can teach a machine learning system how to learn using one large data set and then have it transfer that ability to learn to your own, much smaller training data set. That’s how the custom vision APIs from Salesforce and Microsoft Azure work: You only need 30-50 images showing what you want to be able to classify to get good results.

Transfer learning lets you customize a pre-trained system to your own problem with a relatively small amount of data.
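A minimal sketch of that idea, assuming PyTorch and torchvision: take a network pre-trained on ImageNet, freeze the general features it has already learned, and train only a small new classification head on your own handful of labelled images.

    # Transfer-learning sketch (assumes torchvision 0.13+ for the weights API).
    import torch.nn as nn
    from torchvision import models

    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # pre-trained on ImageNet
    for param in backbone.parameters():
        param.requires_grad = False   # keep the general visual features as they are
    backbone.fc = nn.Linear(backbone.fc.in_features, 3)   # new head for your own 3 classes
    # Train as usual but pass only backbone.fc.parameters() to the optimizer;
    # a few dozen labelled images per class can be enough to get useful results.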



Myth: Anyone can build a machine learning system

There are plenty of open source tools and frameworks for machine learning, and countless courses showing you how to use them. But machine learning is still a specialized technique: you need to know how to prepare data and partition it for training and testing, how to choose the best algorithm and which heuristics to use with it, and how to turn all of that into a reliable system in production. You also need to monitor the system to make sure the results stay relevant over time; whether your market changes or your machine learning system is so good that you end up with a different set of customers, you need to keep checking that the model still fits your problem.
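The partitioning and evaluation steps in that list look roughly like the sketch below, with scikit-learn assumed and synthetic data standing in for whatever you have prepared; monitoring in production is essentially the final check repeated against fresh, labelled samples of current data.

    # Partition data, compare a candidate algorithm by cross-validation,
    # and keep a held-out test set for a final check before production.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split, cross_val_score

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0)

    model = RandomForestClassifier(n_estimators=200, random_state=0)
    print(cross_val_score(model, X_train, y_train, cv=5))   # candidate selection
    model.fit(X_train, y_train)
    print(model.score(X_test, y_test))   # the number you keep re-checking over time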

Getting machine learning right takes experience; if you’re just getting started, look to APIs for pre-trained models that you can call from inside your code while you acquire or hire the data science and machine learning expertise to build custom systems.

Myth: All patterns in the data are useful

Asthma sufferers, people with chest pain or heart disease and anyone who is 100 years old have a much better survival rate for pneumonia than you’d expect. So good, in fact, that a simple machine learning system designed to automate hospital admission might send them home (a rule-based system trained on the same data as a neural net did exactly that). Unfortunately, the reason they have such high survival rates is that they’re always admitted immediately because pneumonia is so dangerous to them.

The system is seeing a valid pattern in the data; it’s just not a useful pattern for choosing who to admit (though it would help an insurance company predict the cost of treatment). Even more dangerously, you won’t know that those unhelpful anti-patterns are in your data set unless you already know about them.

In other cases, a system can learn a valid pattern (like a controversial facial recognition system that accurately predicted sexual orientation from selfies) that isn’t useful because it doesn’t have a clear and obvious explanation (in this case the photographs appear to be showing social cues like pose rather than anything innate).

“Black box” models are efficient but don’t make it clear what pattern they have learned. More transparent, intelligible algorithms like Generalized Additive Models make it clearer what the model has learned so you can decide if it’s useful to deploy.
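One way to try an intelligible model is the pygam package (an assumption here, not something the article prescribes): each input gets its own smooth term, so you can read off the learned effect of every feature and decide whether it is a pattern you actually want to act on.

    # Generalized Additive Model sketch (assumes pygam and scikit-learn).
    from pygam import LogisticGAM, s
    from sklearn.datasets import load_breast_cancer

    X, y = load_breast_cancer(return_X_y=True)
    gam = LogisticGAM(s(0) + s(1) + s(2)).fit(X[:, :3], y)   # one smooth term per feature
    gam.summary()   # per-feature terms a human can inspect and question
    # Plotting gam.partial_dependence(term=i) for each term shows the learned
    # shape, e.g. how predicted risk rises or falls with a single input.
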
Myth: Reinforcement learning is ready to use

Virtually all of the machine learning systems in use today use supervised learning; in most cases, they’re trained on clearly labelled data sets that humans have been involved in preparing. Curating these data sets takes time and effort, so there’s a lot of interest in unsupervised forms of learning, especially reinforcement learning (RL) — where an agent learns by trial and error, by interacting with its environment and receiving rewards for correct behaviour. DeepMind’s AlphaGo system used RL alongside supervised learning to beat high-ranking Go players, and Libratus, a system built by a team at Carnegie Mellon, used RL alongside two other AI techniques to beat some of the best poker players in the world at no-limit Texas Hold ’Em (which has a long and complex betting strategy). Researchers are experimenting with RL for everything from robotics to testing security software.

RL is less common outside of research, though. Google uses DeepMind’s technology to save power in its data centers by learning to cool them more efficiently, and Microsoft uses a specific, limited version of RL called contextual bandits to personalize news headlines for visitors to MSN.com. The problem is that few real-world environments have easily discoverable rewards and immediate feedback, and it’s particularly tricky to allocate rewards when the agent takes multiple actions before anything happens.
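A toy epsilon-greedy bandit makes the contrast clear: it works because the reward (a click) arrives immediately after a single action, which is exactly what most real-world problems lack. The headlines and simulated click rates below are hypothetical, and a real contextual bandit would also condition its choice on features of the visitor.

    # Epsilon-greedy bandit for picking a headline; the reward is one simulated click.
    import random

    headlines = ["Headline A", "Headline B", "Headline C"]
    shows = {h: 0 for h in headlines}
    clicks = {h: 0 for h in headlines}
    true_rate = {"Headline A": 0.02, "Headline B": 0.05, "Headline C": 0.03}  # unknown to the learner

    for _ in range(10000):
        if random.random() < 0.1:   # explore 10% of the time
            choice = random.choice(headlines)
        else:                       # otherwise exploit the best estimate so far
            choice = max(headlines, key=lambda h: clicks[h] / shows[h] if shows[h] else 0.0)
        shows[choice] += 1
        clicks[choice] += random.random() < true_rate[choice]   # immediate feedback
    print({h: round(clicks[h] / shows[h], 3) for h in headlines})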

Myth: Machine learning is unbiased

Because machine learning learns from data, it’s going to replicate any biases in the data set. Searching for images of CEOs is likely to show photos of white, male CEOs because more CEOs are white and male than not. But it turns out that machine learning also amplifies bias.

The COCO data set often used to train image recognition systems has photos of both men and women, but more of the women are shown next to kitchen equipment, and more of the men are shown with computer keyboards and mice, or with tennis rackets and snowboards. Train a system on COCO and it will associate men with computer hardware even more strongly than the statistics in the original photos justify.

One machine learning system can also add bias to another. Train a machine learning system with popular frameworks for representing words as vectors that capture the relationships between them and it will learn stereotypes like “man is to woman as computer programmer is to homemaker,” or doctor to nurse and boss to receptionist. And if those vectors feed a system that translates from languages with gender-neutral pronouns, such as Finnish or Turkish, into languages with pronouns like he and she, such as English, “they are a doctor” turns into “he is a doctor” and “they are a nurse” turns into “she is a nurse.”
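You can see those learned associations directly by probing a pre-trained embedding; the sketch below assumes gensim and its downloadable “word2vec-google-news-300” vectors (a download of well over a gigabyte).

    # Probe pre-trained word vectors for stereotyped analogies (assumes gensim).
    import gensim.downloader as api

    vectors = api.load("word2vec-google-news-300")
    # man : computer_programmer :: woman : ?
    print(vectors.most_similar(positive=["woman", "computer_programmer"],
                               negative=["man"], topn=5))
    # man : doctor :: woman : ?
    print(vectors.most_similar(positive=["woman", "doctor"], negative=["man"], topn=5))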

Getting similar recommendations on a shopping site is useful, but it’s problematic when it comes to sensitive areas and can produce a feedback loop; if you join a Facebook group opposed to vaccination, Facebook’s recommendation engine will suggest other groups focused on conspiracy theories or the belief that the Earth is flat.

It’s important to be aware of the issues of bias in machine learning. If you can’t remove bias in your training data set, use techniques like regularizing the gender associations between word pairs to reduce bias or adding unrelated items to recommendations to avoid the ‘filter bubble’.
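One such technique from the research literature, sketched below with numpy and random stand-in vectors, is to estimate a gender direction from word pairs such as “he”/“she” and remove that component from words that ought to be neutral; this is a simplification of published hard-debiasing methods, not a complete fix.

    # Project the "gender" component out of a word vector (numpy; the vectors
    # are random stand-ins for real embeddings, purely to show the arithmetic).
    import numpy as np

    rng = np.random.default_rng(0)
    he, she, doctor = (rng.normal(size=300) for _ in range(3))

    g = she - he
    g /= np.linalg.norm(g)                              # unit "gender direction"
    doctor_neutral = doctor - np.dot(doctor, g) * g     # remove that direction
    print(round(float(np.dot(doctor_neutral, g)), 6))   # ~0: no gender component left
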
Myth: Machine learning is only used for good

Machine learning powers anti-virus tools, looking at the behaviour of brand-new attacks to find them as soon as they’re launched. But equally, hackers are using machine learning to probe the defenses of anti-virus tools, as well as to craft targeted phishing attacks at scale by analyzing large amounts of public data or analyzing how successful previous phishing attempts were.

Myth: Machine learning will replace people

It’s common to fret that AI will take away jobs, and it will certainly change what jobs we do and how we do them; machine learning systems improve efficiency and compliance and reduce costs. In the long run it will create new roles in the business as well as making some current positions obsolete. But many of the tasks machine learning automates simply weren’t possible before, either because of complexity or scale; you couldn’t hire enough people to look at every photograph posted to social media to see whether it features your brand, for example.

What machine learning has already started doing is creating new business opportunities, such as improving customer experience with predictive maintenance, and offering suggestions and support to business decision makers. As with previous generations of automation, machine learning can free employees up to use their expertise and creativity.

