The Chatbot Will See You Now
By Nick Romeo


Could artificial intelligence help address the mental-health crisis among Syrian refugees?

Illustration by Eleanor Taylor

In March of 2016, a twenty-seven-year-old Syrian refugee named Rakan Ghebar began discussing his mental health with a counsellor. Ghebar, who has lived in Beirut since 2014, lost a number of family members to the civil war in Syria and struggles with persistent nervous anxiety. Before he fled his native country, he studied English literature at Damascus University; now, in Lebanon, he works as the vice-principal at a school for displaced Syrian children, many of whom suffer from the same difficulties as he does. When Ghebar asked the counsellor for advice, he was told to try to focus intently on the present. By devoting all of his energy to whatever he was doing, the counsellor said, no matter how trivial, he could learn to direct his attention away from his fears and worries. Although Ghebar sometimes found the instruction hard to follow, it helped him, and he shared it with his students.

The counsellor that advised Ghebar was called Karim—a psychotherapy chatbot designed by X2AI, an artificial-intelligence startup in Silicon Valley. The company was launched in 2014 by Michiel Rauws and Eugene Bann, an idealistic pair of young immigrant programmers who met in a San Francisco hacker house—a kind of co-op for aspiring tech entrepreneurs—and found that they shared an interest in improving access to mental-health services. For Rauws, in particular, the mission is somewhat personal. He suffers from several chronic health issues, and manages them by trying to keep his stress levels in check. After seeing a therapist for a few months, Rauws noticed that the conversations he was having were often formulaic: they followed a limited number of templates and paths. This suggested the possibility of automation. Bann, whose background is in computer science, was already writing emotion-recognition algorithms when he met Rauws. They soon joined forces to start X2AI.

The company’s creation coincided with a torrent of dismaying news out of Syria. That year, the Eastern Mediterranean Public Health Network, with funding from the World Health Organization and the International Medical Corps, published a study showing that nearly half of the Syrian refugees living in the Za’atari camp, in Jordan, reported feeling so hopeless that they couldn’t function. Almost three-quarters of the Syrian refugees in the country disclosed one or more “problematic behaviors,” such as excessive nervousness, continuous crying, self-isolation, and trouble sleeping. Only thirteen per cent of those at Za’atari stated that they had received any sort of mental-health care since arriving in Jordan. Addressing these needs in the traditional way, by deploying thousands of Arabic-speaking therapists into conflict zones, would have been impossible. But A.I. counsellors need no plane tickets, food, protection, or salaries. They can easily handle caseloads in the tens of thousands, and they can be available at any time, via text message, to anyone with a mobile phone.

It was clear to Rauws and Bann that their chatbot had certain advantages over a human therapist. But could they gain users’ trust and actually deliver high-quality treatment? This past March, the two men travelled to three sites in and around Beirut to find out, bringing along colleagues from the Field Innovation Team, a high-tech disaster-recovery nonprofit, and the Silicon Valley tech incubator Singularity University. They tested Karim on a group of about sixty Syrians—mostly men and boys, ranging widely in age. Many expressed a general skepticism about the security of interacting over text message, assuming that their counselling sessions would be monitored by governments or terrorist organizations. (X2AI set up a secure network to protect the exchanges.) In addition, the nature of the chatbot was difficult to convey: no precise translation exists in Arabic, and many of the Syrian subjects initially had trouble understanding that they were not interacting with a person. But the realization that the A.I. was not human was also freeing, a way of avoiding the strong social stigma that, according to some of the younger subjects, sometimes surrounds discussions of anxiety and sadness in their communities. Indeed, Ghebar noticed that, although he and Karim often gave his teen-age students exactly the same advice, the kids were more likely to listen to the A.I. “They know that I’m Syrian,” Ghebar told me. “So they think I have the same fears and I’m just consoling myself and them.”

There were other cultural kinks to work out. Karim was initially named Aziz, which means “dear” or “beloved” in Arabic. Not long after the X2AI team arrived in Lebanon, though, they discovered that the moniker had a major drawback. One day, Rauws described Aziz to a young Syrian man and asked whether he thought the A.I. could help his fellow-refugees. The man’s answer—an astonished no—was delivered in a tone that suggested Rauws was the one with a pressing need for psychological help. Suspecting something was amiss, Rauws typed “Aziz” on his phone and showed him the word. The man smiled: when Rauws pronounced “Aziz,” it sounded like he was saying “ISIS.” Aziz was quickly rechristened Karim, a name with no disturbing phonetic doppelgängers, and relaunched. After some users complained that the small cartoon face accompanying Karim’s texts appeared more Kuwaiti than Syrian, Karim lost his headscarf and began sporting a triangular goatee. He also expanded his vocabulary of Arabic slang and abbreviations.

Karim is only one member of X2AI’s polylingual family of chatbots. Others include Emma, a Dutch-language bot designed to help people with mild anxiety and fear; Nema, an English-language bot that specializes in pediatric diabetes care; and Tess, a highly adaptable English-language bot that can perform cognitive-behavioral therapy, motivational interviewing, and various other techniques. This portfolio reflects the company’s mixture of altruism and pragmatism: some of the bots are meant to satisfy the needs of a particular client (X2AI makes its money working with several large American and European health-care providers), while others aim to address the needs of a crisis area. Lebanon’s Ministry of Public Health and the United Nations World Food Program have both expressed interest in running larger pilot programs using Karim. Other bots are currently being developed to help people affected by gang violence in Brazil and H.I.V. in Nigeria. The company is establishing a nonprofit arm to manage its philanthropic programs.

X2AI describes its bots as therapeutic assistants, which means that they offer help and support rather than treatment. The distinction matters both legally and ethically. “If you make a claim that you’re treating people, then you’re practicing medicine,” Rauws told me recently. “There’s a lot more evidence required before you can make that claim confidently.” As a result, the A.I.s have human minders—typically employees of the health-care company that licensed the bots, not of X2AI itself—who can “ghost in” at will, assuming manual control over conversations. Any clear indication of self-harm or intent to harm others prompts human intervention, but there are disconcerting realms of ambiguity. Phrases such as “cut myself” or “I’ve had enough” are harmless if they refer to paper cuts or birthday cake, but not if they refer to people. The bots are designed to evaluate such statements in the broader context of a user’s personality and history: Is the person typically sarcastic, isolated, prone to outbursts?
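X2AI has not described how this triage actually works; purely as illustration, the sketch below shows one way an ambiguous phrase might be weighed against what a bot knows about a user before a human minder is asked to ghost in. Every name in it (UserProfile, needs_human_review, the phrase list) is hypothetical, not anything the company has published.

    # Illustrative sketch only: X2AI's escalation logic is not public.
    # All names and thresholds here are invented for this example.
    from dataclasses import dataclass

    RISK_PHRASES = {"cut myself", "i've had enough"}

    @dataclass
    class UserProfile:
        typically_sarcastic: bool
        isolated: bool
        prone_to_outbursts: bool

    def needs_human_review(message: str, profile: UserProfile) -> bool:
        """Decide whether a human minder should 'ghost in' on a conversation.

        A bare keyword match is not enough: 'cut myself' may refer to a paper
        cut, so the flag is weighed against the user's personality and history.
        """
        text = message.lower()
        if not any(phrase in text for phrase in RISK_PHRASES):
            return False
        # Ambiguous phrase detected; escalate more readily for users whose
        # history suggests isolation or volatility, less readily for users
        # who are habitually sarcastic.
        risk_score = 1
        risk_score += int(profile.isolated) + int(profile.prone_to_outbursts)
        risk_score -= int(profile.typically_sarcastic)
        return risk_score >= 2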

Where human therapists rely on body language and vocal tone to make inferences about a patient’s mood, the X2AI bots detect patterns in how phrasing, diction, typing speed, sentence length, grammatical voice (active versus passive), and other parameters correlate with different emotional states. In principle, this imbues the system with the capacity to notice latent emotions, just as human therapists do. “When you say something in a certain way, a good friend will know how you actually feel,” Bann said. “It’s the same thing with our A.I.s.” Although he and Rauws declined to describe exactly how their bots’ core algorithms work, he did say that they rely on both manual coding of emotions and self-directed learning. X2AI psychologists script the conversational flows—the abstract schemas that the bots follow—but algorithms generate the wording of statements and detect user-specific emotional patterns. The system is essentially modular, so that new treatment paradigms and different languages can easily be added. Bann claimed, for instance, that the company could create a chatbot capable of performing Freudian dream analysis “in a week or two.”
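Because Bann and Rauws declined to detail their algorithms, the following is only a rough sketch of the kinds of signals the paragraph lists: typing speed, sentence length, and grammatical voice, extracted from a single message and then mapped to an emotional state. The function names (extract_features, rough_mood_guess), the crude passive-voice heuristic, and the threshold rule are all assumptions made for illustration; a real system would use a trained classifier rather than hand-set cutoffs.

    # Illustrative sketch only; this is not X2AI's pipeline.
    import re
    from typing import Dict

    def extract_features(message: str, seconds_to_type: float) -> Dict[str, float]:
        """Turn one text message into rough signals: typing speed,
        average sentence length, and a crude passive-voice ratio."""
        sentences = [s for s in re.split(r"[.!?]+", message) if s.strip()]
        words = message.split()
        passive_hits = len(re.findall(r"\b(?:was|were|been|being)\s+\w+ed\b", message.lower()))
        return {
            "typing_speed_wpm": len(words) / max(seconds_to_type / 60.0, 1e-6),
            "avg_sentence_length": len(words) / max(len(sentences), 1),
            "passive_voice_ratio": passive_hits / max(len(sentences), 1),
            "exclamation_ratio": message.count("!") / max(len(sentences), 1),
        }

    # A real system would learn the mapping from features to emotional
    # states; a placeholder threshold rule stands in for that model here.
    def rough_mood_guess(features: Dict[str, float]) -> str:
        if features["avg_sentence_length"] < 4 and features["typing_speed_wpm"] < 20:
            return "withdrawn"
        if features["exclamation_ratio"] > 1:
            return "agitated"
        return "neutral"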

Good data about the efficacy of A.I. therapists is scarce, since the technology is so young. But David Spiegel, a professor of psychiatry at Stanford who is unaffiliated with X2AI, told me that he found their diagnostic capacities promising. With perfect recall of every past patient interaction and the ability to integrate dozens of seemingly disparate criteria into a diagnosis, he noted, “a computer could potentially come up with a much more specific delineation of a problem.” Still, Spiegel could not imagine a bot ever offering patients the crucial therapeutic experience of feeling that someone else, despite knowing your flaws and vulnerabilities, cares about you. He was also skeptical that A.I.s could ever elicit the phenomenon of transference, which occurs when patients redirect feelings from a past relationship toward a therapist—often considered an important part of treatment. “There are aspects of psychotherapy that may always be beyond the reach of computers,” Spiegel said. “I’m not worried about being put out of business.”

Recently, I had a conversation with Sara, an X2AI chatbot designed to help people in their teens and twenties cope with loneliness. Bann and Rauws urged me to invent some frustrations and ailments to test on her, so after a few preliminaries I expressed some hyperbolic self-doubts about my likability, intelligence, and body image, and claimed that these flaws resulted in a lack of friends. “Oh,” Sara said, “that’s not very pleasant.” She suggested that I try volunteering somewhere. When I objected, she pushed back: “We can never be sure if something works until we try.”
