
From ChatGPT to DALL-E: How an AI might take your job, make your medical decisions and decide your bail



“Could AI programs become sentient?” It’s a familiar question. The human race has been asking it since before HAL from 2001: A Space Odyssey appeared on our screens — but when you have the opportunity to interview one of the world’s leading experts on AI, you have to ask it anew.

I’m expecting a roll of the eyes; a weary “Of course not.” But instead, Professor Vincent Conitzer — a computer science professor at Carnegie Mellon University and Oxford University with a focus on AI and machine learning, and the head of Technical AI Engagement at the Institute for Ethics in AI — stops to consider.

“These systems are very clearly getting much better very quickly,” he tells me. “Maybe actually, at some point, we do conclude that a system is conscious.”

A discussion about AI personhood isn’t what I’d first imagined when I started asking a chatbot to write bad White Lotus scripts, and (even worse) The 1975 songs. The groundbreaking ChatGPT tool was released by OpenAI — a non-profit research laboratory co-founded by Elon Musk in 2015 — earlier this month. It allows a user to ask a question, or give a prompt, to which the tool will respond in the style of a human. And it is extremely convincing.

After a few hours of making it do party tricks — like writing haikus in the style of Frasier Crane, or Sopranos scripts where Tony comes from Texas — genuine use cases quickly emerge.

I begin pasting draft emails and asking the tool to make them sound more professional. I ask for help creating Microsoft Excel scripts, and I ask it to brainstorm Christmas gift ideas for nieces and nephews. (For my 12-year-old nephew, it suggests Monopoly and a soccer ball. For a 10-year-old niece, it offers art supplies or movie theater tickets; for the nine-year-old, a science experiment kit; and for the six-year-old, a dress-up outfit and a musical instrument.)
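
To give a flavour of what such a request looks like outside the chat window, here is a minimal sketch against OpenAI’s text completion API as it existed at the time. ChatGPT itself was only available through the web interface, so the model name, prompt wording and draft email below are illustrative assumptions rather than what was actually typed.

```python
# A rough sketch of the "make this email sound more professional" trick,
# using OpenAI's completion API of the period. Model, prompt and draft
# are assumptions for illustration only.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

draft = """hey, just checking if you got my last email about the invoice.
need it sorted by friday. thanks"""

response = openai.Completion.create(
    model="text-davinci-003",  # assumed model; ChatGPT had no public API yet
    prompt=f"Rewrite the following email so it sounds more professional:\n\n{draft}",
    max_tokens=200,
    temperature=0.7,
)

print(response["choices"][0]["text"].strip())
```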

Jeremy Howard, an artificial intelligence researcher whose work inspired ChatGPT’s creation, reportedly told his daughter: “Don’t trust everything it gives you.” Specifically, he cautioned against taking ChatGPT too seriously or assuming it doesn’t have biases just because it isn’t a human. “It can make mistakes,” Howard said.

And it does. On mathematical questions, it misidentified prime numbers and failed basic arithmetic. When writing code, it makes the kind of careless mistakes a caffeinated intern would make. And it thought the founder of Time magazine was the guy behind Birdseye frozen food.

Nonetheless, it seems more human than computer. If it gets caught out on a mistake, it’ll apologise and try to explain why it made it; if you ask it a badly worded question, it typically gathers the intent regardless; and there are flashes of creativity when it is given more open-ended writing tasks.

All of which raises some unsettling questions.

A recent Pew Research Center discussion saw experts weigh in on a variety of AI development concerns — including concerns that it is difficult to make an AI system that is truly ethical; that AI control is concentrated in the hands of powerful companies and governments; and that systems are so opaque that abuses are hard to spot and remedy.

John Harlow, smart cities research specialist at the Engagement Lab at Emerson College, said during that discussion: “AI will mostly be used in questionable ways in the next decade. Why? That’s how it’s been used thus far, and we aren’t training or embedding ethicists where AI is under development, so why would anything change?”

Will an AI take my job?

In my interview with Conitzer, I was initially motivated by self-interest. Specifically, having seen how well ChatGPT could write, I wanted to know: Is my job in journalism on the line? With a recession looming, plenty of my colleagues — whether seriously or in jest — have brought up the idea that advanced AI could scythe through payrolls like a hot knife through butter. This is, after all, a tool that doesn’t just pump out aggregated versions of existing news articles. It’s able to put together entire opinion pieces from one-sentence prompts. I ask it to write a newspaper editorial considering whether America’s political polarization will lead to civil war. Within seconds, it starts generating an article so coherent I can’t tell it apart from a lot of editorials on the internet. It weighs the roiling political undercurrents against the checks and balances that have protected the nation for more than 200 years, before concluding: “Despite these factors, it is still possible that political polarization could lead to civil war if left unchecked.”

Microsoft laid off its MSN aggregation editors in 2020 to replace them with AI. Now that the technology exists to go even further, is a lot of reporting toast? Indeed, is a huge chunk of the jobs market going the same way?

Conitzer isn’t too worried — yet — and sees AI as complementary to most people’s roles rather than a potential replacement: “Most jobs have multiple tasks. In some situations, we could automate some of the more repetitive tasks, which could mean you need fewer people to carry out a role than you needed before.”

One example is radiology. “Radiologists do lots of things, but let’s talk about the specific part of the role where they get an image of some sort and they try to classify whether it contains a malignant tumor, for example. That’s something that you could try to automate through AI, because you’re trying to match new images to existing patterns.”
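
To make the pattern-matching idea concrete, here is a minimal, purely illustrative sketch: a classifier is trained on labelled examples and can only reproduce whatever patterns those historical labels encode. The data are synthetic stand-ins, not real imaging features.

```python
# Toy illustration of classification by pattern matching: a model trained on
# labelled "scans" flags new ones that resemble past malignant cases.
# The features and labels here are random stand-ins, not medical data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64))            # 500 "scans", 64 extracted features each
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # stand-in for "malignant" labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

print("held-out accuracy:", model.score(X_test, y_test))
# The model has no notion of malignancy beyond whatever patterns
# the historical training labels happen to encode.
```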

Relationship-focused jobs with limited repetition should be largely untouched for the foreseeable future, he believes. He gives the example of an elementary school teacher as a role that’s likely to be around for a long time to come. Indeed, many jobs that now command high salaries and multiple years of higher education — software engineering, for example — are much more likely to be automated away than jobs like teaching, nursing and elder care, which are currently less valued in the marketplace.

But even if your job is safe, AI could have a hand in selecting your future colleagues. And it’s a minefield.

AI software can scan through thousands of applications to find the 10 candidates who are, on paper, the best fit for the job. But building such a system risks introducing bias from the start, because you have to show it examples of resumes from successful and unsuccessful candidates. That risks baking a hiring manager’s prejudices into a company’s practices forever.

“You may not have realized it, but if one of the things you were looking at was the person’s race or gender, and that affected whether you decided to interview them, the system is likely to pick up on that and say, well, that’s apparently a feature I should care about, because that’s what the labels show me,” Conitzer says.
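
Here is a minimal sketch of that failure mode with synthetic data: the historical interview decisions were influenced by a protected attribute, so a model trained on those labels treats the attribute as a feature worth caring about. All of the numbers are invented for illustration.

```python
# Sketch of label bias in resume screening: past interview decisions were
# skewed against one group, and the model learns that skew as if it were signal.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
skill = rng.normal(size=n)            # what should matter
group = rng.integers(0, 2, size=n)    # protected attribute (0 or 1)

# Biased historical labels: equally skilled candidates in group 1
# were interviewed less often.
interviewed = (skill - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, interviewed)

print("learned weight on skill: %.2f" % model.coef_[0][0])
print("learned weight on group: %.2f" % model.coef_[0][1])
# The group coefficient comes out strongly negative: the system has
# "picked up on" the hiring manager's prejudice, exactly as described above.
```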

That’s a problem New York City is now taking action to tackle, with a first-of-its-kind law that would require any AI-led employment decision system to be audited for bias.

From policing to parole officers

Even more controversial than the jobs market, AI involvement in the justice system is, deservedly, alarming to many. The idea that we could soon end up living in a society resembling Minority Report — the 2002 movie starring Tom Cruise, in which police use psychic technology to arrest and convict murderers before they commit their crimes — might be a long shot. But we already see machine learning being used to decide whether a prisoner should be given parole, or a suspect given bail.

New Jersey adopted algorithmic risk assessment in 2014, cheered on by the Pretrial Justice Institute (PJI). But the PJI has since reversed course, saying the approach actually increases racial disparities. Tenille Patterson, an executive partner at the PJI, told Wired: “We saw in jurisdictions that use the tools and saw jail populations decrease that they were not able to see disparities decrease, and in some cases they saw disparities increase.”

When we get down to on-the-ground policing, however, there is a whole other set of issues to be explored. Conitzer mentions that AI can be used to generate police patrol routes based on crime statistics — which comes with its own problems: “If the only way you find out that crimes are occurring is by sending police cars there in the first place, it could be that the historical data that you have overrepresent certain neighborhoods because that’s where you send out police cars to, and you could get into a vicious cycle of over-policing where we keep detecting crimes there because we keep sending out police cars there.”
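
A toy simulation illustrates the loop he describes. The numbers are invented; the point is only that when patrol allocation is driven by detected crime, the data stay skewed even though the underlying crime rates are identical.

```python
# Toy simulation of the over-policing feedback loop: crimes are only detected
# where patrols are sent, and the next round of patrols follows the detections.
import numpy as np

rng = np.random.default_rng(2)
true_rate = np.array([10.0, 10.0])   # two neighbourhoods, identical real crime rates
patrols = np.array([0.6, 0.4])       # the initial patrol split is slightly uneven
detected = np.zeros(2)

for _ in range(20):
    # Crimes are only *detected* where patrols are present
    detected += rng.poisson(true_rate * patrols)
    # Next round's patrols follow the (biased) detection data;
    # the +1 is smoothing to avoid dividing by zero early on
    patrols = (detected + 1) / (detected + 1).sum()

print("detected crime counts:", detected)
print("final patrol shares:  ", patrols.round(2))
# Detection stays skewed toward the neighbourhood that started with more
# patrols, and the allocation never corrects itself because the system
# only ever sees its own data.
```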

When computers decide on your hospital treatment

And if AI doesn’t reach you through the long arm of the law, it may get you on your next trip to the hospital: “Maybe there are cases where you want to prioritize one patient over another,” says Conitzer. “And that leads to the question, ‘For what reason is it okay to prioritize them?’

“Should you prioritize somebody who is younger? Should you prioritize somebody who is otherwise less sick?”

Hospitals are already doing this using human expertise — but delegating such decision-making to software really ups the stakes. It has led experts like Conitzer to examine how people think, and whether an objective system can be created in the first place. Like many of his academic colleagues who started out in computer science, he now works half the time in the philosophy department.

In a 2022 paper, Conitzer and others examined which features of patients are, according to the British public, morally relevant in ventilator triage. The top three features that people said should count against ventilator access were having committed violent crimes in the past; having unnecessarily engaged in activities with a high risk of Covid-19 infection; and having a low chance of survival.

“We didn’t mean to come up with the answers to those, but we wanted to come up with a process for how you would figure out what the objective function should be in such a way that you could actually implement it into an AI system,” Conitzer explains.
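
As a purely illustrative sketch of what turning such an elicited objective function into code might look like: the features echo the survey above, but the weights are invented and are not taken from the paper.

```python
# Hedged sketch of an elicited triage objective function. Feature names mirror
# the survey described above; the weights are invented for illustration only.
def triage_priority(patient: dict) -> float:
    score = 0.0
    score += 2.0 * patient["chance_of_survival"]   # between 0.0 and 1.0
    if patient["violent_crime_history"]:
        score -= 1.0                               # counted against access by respondents
    if patient["high_risk_covid_behaviour"]:
        score -= 0.5                               # likewise
    return score

patients = [
    {"chance_of_survival": 0.9, "violent_crime_history": False, "high_risk_covid_behaviour": False},
    {"chance_of_survival": 0.4, "violent_crime_history": False, "high_risk_covid_behaviour": True},
]

# Higher score = higher priority for a ventilator under this illustrative function
patients.sort(key=triage_priority, reverse=True)
print([round(triage_priority(p), 2) for p in patients])
```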

A paper in the Journal of Ethics warned that it could be “dehumanizing” to automate the process fully, although using AI to supplement the human decision-making process was regarded as acceptable.

Incorporating patients’ own values could also be key. Ethicist and Stanford Institute for Human-Centered AI fellow Kathleen Creel has said that AI processing could be tuned, like a radio, to reflect a patient’s attitudes. Imagine a system wherein medical decisions are recommended by an AI-powered algorithm. Each patient could answer certain questions according to their individual needs and beliefs, such as: “Would you rather risk surgical complications to treat a benign tumor than risk missing a cancerous tumor?” The system then becomes tailored to them. Someone who would rather undergo surgery every time will have invasive procedures recommended more often. Someone who has clearly told the system they prefer a more hands-off approach will have less invasive solutions to health problems prioritised.
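
Here is a minimal sketch of that tuning idea. The options, scores and the single risk-tolerance knob are all invented for illustration, not a description of any real diagnostic system.

```python
# Sketch of preference-tuned recommendations: the patient's answers set a
# risk-tolerance weight that tilts the ranking toward or away from invasive options.
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    invasiveness: float      # 0 = hands-off, 1 = major surgery
    expected_benefit: float  # between 0 and 1

def recommend(options, risk_tolerance):
    """risk_tolerance near 1.0 means 'operate to be sure'; near 0.0 means 'watch and wait'."""
    def score(o):
        return o.expected_benefit - (1.0 - risk_tolerance) * o.invasiveness
    return max(options, key=score)

options = [
    Option("surgical removal", invasiveness=0.9, expected_benefit=0.85),
    Option("active monitoring", invasiveness=0.1, expected_benefit=0.6),
]

print(recommend(options, risk_tolerance=0.9).name)  # leans toward surgery
print(recommend(options, risk_tolerance=0.2).name)  # leans toward monitoring
```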

“Patients deserve to have their values reflected in this debate and in the algorithms,” Creel says. “Adding a degree of patient advocacy would be a positive step in the evolution of AI in medical diagnostics.”

One might argue that implementing such systems in a medical environment is a good idea. Why, after all, should we rely on the judgment of a single physician rather than the aggregated knowledge of thousands, delivered through an AI-led system?

Can an AI be an artist?

In other arenas, we are much less likely to concede to machines. Art is still seen as the purview of people, a quintessentially human endeavour. Can art be meaningfully created by artificial intelligence?

This week, I asked the OpenAI image-generation service DALL-E to create images of various towns and cities in which I’ve lived, in the style of different artists. It painted South Shields in the style of Edward Hopper, Leicester through the eyes of Caravaggio, London in the style of David Hockney, and New York City in the style of Rothko. It looked cool. I tweeted it.
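
For readers curious what such a request looks like programmatically, here is a hedged sketch against the OpenAI image API of the period. The API key placeholder and the prompt wording are assumptions for illustration, not what was actually submitted.

```python
# Sketch of generating a "city in a painter's style" image with the DALL-E API.
# The prompt is an assumption about roughly what was typed.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

result = openai.Image.create(
    prompt="South Shields in the style of an Edward Hopper painting",
    n=1,
    size="1024x1024",
)
print(result["data"][0]["url"])  # link to the generated image
```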

These were imitations, however; clever imitations, but imitations nonetheless. It’s hard to imagine a world where original art, however beautiful, has more value when produced by an AI than when it’s produced by a person. Part of the enjoyment and the appreciation of a painting, surely, is the way in which we relate to the artist.

Conitzer foresees a different problem: accusations of plagiarism. If any style can be copied with the click of a button, can years of perfecting one’s own work — whether that’s painting, a comic book style, digital drawing or anime — be erased when an AI is able to learn and reproduce it with a single click? “You could also ask a human being to paint in somebody else’s style, right? I think that’s something that we generally consider okay, but there is kind of a question of: How much copying is [DALL-E] really doing? I think for now, our laws are not really very well set up for this.”

Steven Sacks’ bitforms gallery in New York City has an exhibit this month that exclusively features works created with or inspired by AI. He told CNN that AI is a “brilliant partner creatively”, and added, to The San Francisco Standard: “I think the whole essence of DALL-E is surprise. Everything is pretty extraordinary when you think about how it was created using just key phrases that come from the artist.”

But for every enthusiast like Sacks, there are a number of sceptics. Digital artists in particular have pushed back against the use of image-creation AI, arguing that their livelihoods are being threatened and, through algorithmic imitation, their work is effectively being stolen.

One digital artist told BuzzFeed News: “Artists dislike AI art because the programs are trained unethically using databases of art belonging to artists who have not given their consent. This is about theft.”

Conitzer doesn’t think that AI alone will necessarily begin to produce what we’d recognize as art but: “I definitely think that people together with AI would be able to generate art.” He adds that we’ve had discussions about technological aids to art for centuries: “Photography, there was probably an initial reaction to photography. You’re just clicking and just observing nature. That’s nature doing the work. That’s not you doing the work… And I think here it’s going to be similar.

“I think to some extent, it’s fairly obvious that you could really do art by being deliberate on how you use these systems and put something together.”

But then how much credit do you give the AI for its helping hand? “We don’t credit the camera together with the photographer,” Conitzer says, whereas “we could see an AI system given a little bit of credit.” Inevitably, we’re back to the sentience question.

Elon Musk is no longer on the board of OpenAI, and is on record as saying that his confidence in the company while he was there was “not high” when it came to safety. He recently admitted that ChatGPT is “scary good” but warned, “We are not far from dangerously strong AI.”

Conitzer suggests that it’s not up to any one person to decide how human an AI has become. He foresees those philosophical issues concerning personhood in the future: “Even if we AI researchers and maybe philosophers agree that these things are presumably not conscious, other people might just start to flat out disagree and say, well, look how it’s talking with me. I just cannot believe that I’m not talking to a conscious entity at this point.” In those kinds of discussions, he suggests, neither party will be clearly right or wrong. If we are to contribute meaningfully, we will have to become comfortable with living in the grey areas.
