
ChatGPT Health promises to personalise health information. It comes with many risks

Image: Berke Citak / Unsplash

Many of us already use generative artificial intelligence (AI) tools such as ChatGPT for health advice. They give quick, confident and personalised answers, and the experience can feel more private than speaking to a human.

Now, several AI companies have unveiled dedicated “health and wellness” tools. The most prominent is ChatGPT Health, launched by OpenAI earlier this month.

ChatGPT Health promises to generate more personalised answers, by allowing users to link medical records and wellness apps, upload diagnostic imaging and interpret test results.

But how does it really work? And is it safe?

Most of what we know about this new tool comes from the company that launched it, and questions remain about how ChatGPT Health would work in Australia. Currently, users in Australia can sign up for a waitlist to request access.

Let’s take a look.

AI health advice is booming

Data from 2024 shows 46% of Australians had recently used an AI tool.

Health queries are popular. According to OpenAI, one in four regular ChatGPT users worldwide submit a health-related prompt each week.

Our 2024 study estimated almost one in ten Australians had asked ChatGPT a health query in the previous six months.

This was more common for groups that face challenges finding accessible health information, including:

  • people born in a non-English speaking country
  • those who spoke another language at home
  • people with limited health literacy.

Among those who hadn’t recently used ChatGPT for health, 39% were considering using it soon.

How accurate is the advice?

Independent research consistently shows generative AI tools sometimes give unsafe health advice, even when they have access to a medical record.

There are several high-profile examples of AI tools giving unsafe health advice, including when ChatGPT allegedly encouraged suicidal thoughts.

Recently, Google removed several AI Overviews on health topics – summaries which appear at the top of search results – after a Guardian investigation found the advice about blood test results was dangerous and misleading.

This was just one of the health prompts studied. There could be much more unsafe advice from AI tools that we don’t yet know about.

So, what’s new about ChatGPT Health?

The AI tool has several new features aimed at personalising its answers.

According to OpenAI, users will be able to connect their ChatGPT Health account with medical records and smartphone apps such as MyFitnessPal. This would allow the tool to use personal data about diagnoses, blood tests, and monitoring, as well as relevant context from the user’s general ChatGPT conversations.

OpenAI emphasises information doesn’t flow the other way: conversations in ChatGPT Health are kept separate from general ChatGPT, with stronger security and privacy. The company also says ChatGPT Health data won’t be used to train foundation models.

OpenAI says it has worked with more than 260 clinicians in 60 countries (including Australia), to give feedback on and improve the quality of ChatGPT Health outputs.

In theory, all of this means ChatGPT Health could give more personalised answers compared to general ChatGPT, with greater privacy.

But are there still risks?

Yes. OpenAI openly states ChatGPT Health is not designed to replace medical care and is not intended for diagnosis or treatment.

It can still make mistakes. Even if ChatGPT Health has access to your health data, there is very little information about how accurate and safe the tool is, and how well it has summarised the sources it has used.

The tool has not been independently tested. It’s also unclear whether ChatGPT Health would be considered a medical device and regulated as one in Australia.

The tool’s responses may not reflect Australian clinical guidelines, our health systems and services, and may not meet the needs of our priority populations. These include First Nations people, those from culturally and linguistically diverse backgrounds, people with disability and chronic conditions, and older adults.

We don’t know yet if ChatGPT Health will meet data privacy and security standards we typically expect for medical records in Australia.

Currently, many Australians’ medical records are incomplete due to patchy uptake of MyHealthRecord, meaning even if you upload your medical record, the AI may not have the full picture of your medical history.

For now, OpenAI says medical record and some app integrations are only available in the United States.

So, what’s the best way to use ChatGPT for health questions?

In our research, we have worked with community members to create short educational materials that help people think about the risks that come with relying on AI for health advice, and to consider other options.

Higher risk

Health questions that would usually require clinical expertise to answer carry a higher risk of serious consequences.

Symptom Checker, operated by healthdirect, is a publicly funded, evidence-based tool that can help you understand your next steps and connect you with local services.

For now, we need clear, reliable, independent, and publicly available information about how well the current tools work and the limits of what they can do. This information must be kept up-to-date as the tools evolve.

Julie Ayre, Post Doctoral Research Fellow, Sydney Health Literacy Lab, University of Sydney; Adam Dunn, Professor of Biomedical Informatics, University of Sydney; and Kirsten McCaffery, NHMRC Principal Research Fellow, Sydney School of Health, University of Sydney


ChatGPT’s Revenue Might Be On The Rise But Downloads Are Taking A Hit

OpenAI’s ChatGPT tool has been all the rage for a while now. The company just celebrated one year of success, and the rollout on mobile devices in May brought a massive surge in downloads on the App Store, alongside Google Play.


So while revenue continues to surge, the same cannot be said for downloads, which are declining.

Yes, fewer people are installing the app, which is surprising considering the company recently had to pause growth and bar new signups for its premium subscription because it simply could not keep up with demand.

According to Appfigures data, the company barred new signups in November, but the suspension did not last long. While many expected the accompanying fall in downloads to be temporary, recent stats suggest otherwise.

As per recent app intelligence estimates, ChatGPT’s revenue began rising quickly once subscriptions returned to normal. In October, ChatGPT earned a net $5.6 million after App Store fees.

Net revenue rose to $6.7 million in November, and further to $7.8 million the month after that.

So while OpenAI did lose some revenue during the pause, ChatGPT’s premium tier remains very popular: revenue rose sharply in January, reaching roughly $11 million. And that figure is net of the respective fees and shares Apple and Google take.

So while that’s the good news, let’s look at the bad news. ChatGPT’s installs had grown steadily, month after month, since its mobile rollout last year, even as plenty of industry competitors emerged.

It took about seven months for downloads to peak, at close to 19 million, in November. That is the same month subscriptions were paused, and the point at which downloads began to drop.

As per the estimates, ChatGPT was installed nearly 18 million times in December and just 15 million times last month. That amounts to a 21% fall.

This fall might not have a serious effect on revenue, since conversion rates remain high among those who feel they need the tool. But it does put future growth under scrutiny, especially as rivals multiply and big names like X, formerly Twitter, and Google pick up the pace in the race to the top.

OpenAI's ChatGPT faces declining downloads despite revenue surge, with premium tier popularity sustaining growth.


News Startup Rolls Out AI Editorial Tool That Uses Microsoft Bing And ChatGPT To Generate Multi-Source News Feed

 Popular news site Semafor has just rolled out a powerful AI-based editorial tool dubbed Signals.


The tool is said to use AI-based research, drawing on Microsoft’s Bing and ChatGPT. What users get in the end is a news feed compiled from various sources around the globe, the startup said on Monday.

Microsoft is fully sponsoring Signals and providing an undisclosed amount of funding, as reported by The Financial Times. The customized AI bot was created so journalists could benefit from both the OpenAI platform and Bing.

Meanwhile, the site’s editors said the chatbot was able to generate accurate answers, produce synopses, and cite sources where needed.

In another post on X, the editor-in-chief described the effort as part of fixing a digital media landscape that has been broken for years.

They said AI tools are the future, and that they are not being used solely to generate articles. Instead, the tools assist the research behind the branded stories found on Signals.

Semafor said the AI technology helps reduce bias and mistrust by distilling information from a range of global sources.

On the site, stories produced with AI assistance are labeled “Semafor Signals” and note the support from software giant Microsoft.

There’s even a news section on Signals that combines context for each news story from a range of sources.

Signals appear on Semafor’s homepage and will also be included in its newsletter, according to the announcement.

Semafor’s decision to embrace AI tools sets it apart from other media outlets such as The New York Times. The latter filed a lawsuit against the tech giant and the ChatGPT maker in December over alleged copyright infringement.

Semafor’s use of AI is not about generating articles outright with the technology. That approach was taken by G/O Media, the owner of sites such as Gizmodo and Kotaku, whose AI-produced content was criticized by its own workers in the previous year.

Photo: DIW - AIgen