ChatGPT Health promises to personalise health information. It comes with many risks

Image: Berke Citak / Unsplash

Many of us already use generative artificial intelligence (AI) tools such as ChatGPT for health advice. They give quick, confident and personalised answers, and the experience can feel more private than speaking to a human.

Now, several AI companies have unveiled dedicated “health and wellness” tools. The most prominent is ChatGPT Health, launched by OpenAI earlier this month.

ChatGPT Health promises to generate more personalised answers by allowing users to link medical records and wellness apps, upload diagnostic imaging, and have test results interpreted.

But how does it really work? And is it safe?

Most of what we know about this new tool comes from the company that launched it, and questions remain about how ChatGPT Health would work in Australia. Currently, users in Australia can sign up for a waitlist to request access.

Let’s take a look.

AI health advice is booming

Data from 2024 shows 46% of Australians had recently used an AI tool.

Health queries are popular. According to OpenAI, one in four regular ChatGPT users worldwide submit a health-related prompt each week.

Our 2024 study estimated almost one in ten Australians had asked ChatGPT a health query in the previous six months.

This was more common for groups that face challenges finding accessible health information, including:

  • people born in a non-English speaking country
  • those who spoke another language at home
  • people with limited health literacy.

Among those who hadn’t recently used ChatGPT for health, 39% were considering using it soon.

How accurate is the advice?

Independent research consistently shows generative AI tools sometimes give unsafe health advice, even when they have access to a medical record.

There are several high-profile examples of AI tools giving unsafe health advice, including when ChatGPT allegedly encouraged suicidal thoughts.

Recently, Google removed several AI Overviews on health topics – summaries which appear at the top of search results – after a Guardian investigation found the advice about blood test results was dangerous and misleading.

This was just one type of health prompt investigated. There may be much more advice these tools are getting wrong that we don’t yet know about.

So, what’s new about ChatGPT Health?

The AI tool has several new features aimed at personalising its answers.

According to OpenAI, users will be able to connect their ChatGPT Health account with medical records and smartphone apps such as MyFitnessPal. This would allow the tool to use personal data about diagnoses, blood tests, and monitoring, as well as relevant context from the user’s general ChatGPT conversations.

OpenAI emphasises information doesn’t flow the other way: conversations in ChatGPT Health are kept separate from general ChatGPT, with stronger security and privacy. The company also says ChatGPT Health data won’t be used to train foundation models.

OpenAI says it has worked with more than 260 clinicians in 60 countries (including Australia) to give feedback on and improve the quality of ChatGPT Health outputs.

In theory, all of this means ChatGPT Health could give more personalised answers compared to general ChatGPT, with greater privacy.

But are there still risks?

Yes. OpenAI itself states ChatGPT Health is not designed to replace medical care and is not intended for diagnosis or treatment.

It can still make mistakes. Even if ChatGPT Health has access to your health data, there is very little information about how accurate and safe the tool is, and how well it has summarised the sources it has used.

The tool has not been independently tested. It’s also unclear whether ChatGPT Health would be considered a medical device and regulated as one in Australia.

The tool’s responses may not reflect Australian clinical guidelines or our health systems and services, and may not meet the needs of our priority populations. These include First Nations people, people from culturally and linguistically diverse backgrounds, people with disability and chronic conditions, and older adults.

We don’t know yet if ChatGPT Health will meet data privacy and security standards we typically expect for medical records in Australia.

Currently, many Australians’ medical records are incomplete due to patchy uptake of My Health Record, meaning even if you upload your medical record, the AI may not have the full picture of your medical history.

For now, OpenAI says medical record and some app integrations are only available in the United States.

So, what’s the best way to use ChatGPT for health questions?

In our research, we have worked with community members to create short educational materials that help people think about the risks that come with relying on AI for health advice, and to consider other options.

Higher risk

Health questions that would usually require clinical expertise to answer carry more risk of serious consequences.

Symptom Checker, operated by healthdirect, is a publicly funded, evidence-based tool that will help you understand your next steps and connect you with local services.

For now, we need clear, reliable, independent, and publicly available information about how well the current tools work and the limits of what they can do. This information must be kept up-to-date as the tools evolve.

Julie Ayre, Post Doctoral Research Fellow, Sydney Health Literacy Lab, University of Sydney; Adam Dunn, Professor of Biomedical Informatics, University of Sydney; and Kirsten McCaffery, NHMRC Principal Research Fellow, Sydney School of Health, University of Sydney


YouTube Is Testing A New Means To Prevent Ad Skipping And It Involves Hiding The ‘Skip’ Button

Video streaming giant YouTube is working on a new way to ensure users see ads and don’t skip them: hiding the ‘Skip’ button.


The Skip button has long made the free, ad-supported viewing experience bearable, and it could soon be hidden from view. But if you’ve really got a problem with ads, the app is giving you the chance to pay extra and subscribe to Premium.

Google says ad monetization is one of the chief ways it makes money and helps creators get their rightful share. Ad blockers have already made it hard for websites to earn from ads, and the company now sees the Skip button as another drag on its monetization potential.


We can see this as a smart tactic to push marketing of its Premium tier. Remember, many viewers are already complaining that ad segments keep getting longer by the day, and if you can’t skip them, the experience only gets worse.

YouTube has been very vocal in the ongoing debate about freeloaders who don’t want Premium but also don’t want to watch the ads that help the company support its free services, including streaming.

Google has shared its dilemma on many platforms: users are getting smarter, using new browser extensions and ad blockers to hinder ads. This may be why it feels forced to take such drastic measures.


In another experiment, the app injected ads directly inside video streams to defeat any kind of blocking feature. Now this is coming into play as its next move to protect ads, all thanks to one Reddit user who says they caught the app running a test that hides the Skip button, leaving viewers unable to skip the ad.

The placement is very smart, as the button sits out of users’ reach, obscured by on-screen stickers. You can liken this to a sticker covering the option to skip tipping on a POS machine.

Comments on that Reddit thread confirm others are seeing the same behaviour, and suggest the change might launch much sooner than expected. Who knows, this might stay a limited test, as negative feedback on the subject is growing, and Google does take users’ feedback into consideration.


It’s all going to remain a mystery for some time, but we’ll keep you updated. Whatever the case may be, hiding the Skip button is a really extreme step, and it seems Google is running out of options to discourage ad skipping.

What could possibly be worse is the app deleting the Skip button altogether. Yikes, that would be a nightmare, we feel. Do you agree?

Image: BigBlueMountainStar / Reddit

Shareholders Push for Apple to Open Up About AI Use

At Apple's annual meeting, big shareholders have a chance to suggest changes. This year, the American Federation of Labor and Congress of Industrial Organizations (AFL-CIO) wants Apple to share how it uses AI and the rules it follows for using this technology safely.


Norges Bank Investment Management and Legal & General, two of Apple's biggest shareholders, are backing this idea. Norges Bank says Apple should think about how its work and products affect society. Legal & General talked to Apple about being more open with its AI plans but didn't get the details they wanted. They think Apple should be clear about how it uses AI and how it manages any risks.

A big advisory group, Institutional Shareholder Services, is telling Apple's investors to support this AI proposal. They believe Apple's current disclosures don't fully cover the risks of using AI, which makes it hard for shareholders to understand the dangers.


Apple, however, doesn't want this proposal to pass. The company says the report being asked for is too broad and could make it share secrets that would hurt its competitive edge. In the U.S., even strong shareholder support doesn't force a company to act, but backing from more than 30% of investors usually makes a company think seriously about an idea.

Everyone is watching to see whether Apple will introduce new AI features at its WWDC event this year. This push for more openness about AI use at Apple is part of a bigger conversation about how big tech companies handle new technologies and their impact on society.

Image: Digital Information World - AIgen

Apple Agrees Its CSAM Scanning Initiative For Checking Child Abuse Materials Could Be Misused In A Shock Turn Of Events

In a shocking turn of events, iPhone maker Apple has confirmed that its approach to tackling explicit and child abuse materials, better known as CSAM scanning, could be abused.


The company highlights how repressive governments might use such scanning for other ends, such as searching for material related to political protests.

The Cupertino firm rejected this reasoning when critics first raised it. But an ironic twist took center stage in a reply the company recently put forward to the government of Australia.

Apple mentioned how it had planned to roll out on-device scanning with the help of digital fingerprinting techniques.

Those fingerprints are a means to match known pictures without any individual getting the chance to view them. They’re deliberately fuzzy, so they can still match pictures that have been cropped or edited, while giving rise to a small number of false positives.
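To make the idea concrete, here is a toy Python sketch of fuzzy image matching with a perceptual “average hash”. It is purely illustrative, not Apple’s actual fingerprinting system: the hash function, the threshold value and the helper names are all our own assumptions.

```python
# Illustrative sketch only: a toy perceptual fingerprint in the spirit of
# fuzzy image matching. Real systems are far more sophisticated.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Shrink to size x size, grey-scale, then set one bit per pixel
    depending on whether it is brighter than the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# Two images "match" when their fingerprints differ in only a few bits.
# The threshold is the fuzziness knob: higher tolerates heavier edits
# but admits more false positives (hypothetical value for illustration).
MATCH_THRESHOLD = 10

def is_match(path_a: str, path_b: str) -> bool:
    return hamming_distance(average_hash(path_a),
                            average_hash(path_b)) <= MATCH_THRESHOLD
```

Because small crops or re-encodings flip only a few bits of the hash, matching within a bit-distance threshold tolerates edits, which is exactly what makes occasional false positives unavoidable.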

To be a little clearer, Apple confirmed its proposal was designed to respect users’ privacy because scanning would be carried out on users’ own devices, and no one would see any of the images until several matches had been flagged.

The issue, Apple confirmed, is linked to repressive governments more than anyone else: the system had great potential to be abused by a long list of such governments.

Meanwhile, digital fingerprints can be produced for any kind of material, not only CSAM. So far, there is no plan in place to prevent authoritarian governments from adding all kinds of political-themed content to the picture databases.

A tool rolled out to target serious offenders could be adapted to highlight anyone who shows opposition to a government or its policies. In such cases, Apple would find itself helping repression or, in worse scenarios, deepening an already chaotic political crisis in which hundreds of activists are involved.

Apple says it would never allow this. But such promises are predicated on the iPhone maker having the legal freedom to say no, and that would not always be the case. In places such as China, the Cupertino firm has been legally forced to get rid of VPN, news and other platforms, and to store the iCloud information of Chinese citizens on servers owned by a firm controlled by the government.

In reality, there was just no way the tech giant could fulfill such promises: it could not refuse to comply with legal requirements to process databases supplied by a government alongside CSAM pictures.

Those databases could include matches for material used by critics and by those protesting against such schemes. Clearly, it’s a serious U-turn that not many saw coming, for obvious reasons.

Scanning for one kind of content paves the way to surveillance on a larger scale, and creates the desire to peer into encrypted messaging systems for other content types as well.

Now we’re seeing Apple quietly step into the limelight on this issue, speaking out against the Australian government’s clauses forcing tech firms to carry out scans for CSAM. As one can imagine, it’s a slippery slope.

Apple fears surveillance tools like these could be modified to look for other kinds of content including an individual’s political, religious, and even reproductive activity.

Apple's plan for CSAM scanning draws criticism amid fears of government misuse for political surveillance.

Stability AI Launches Stable Diffusion 3 to Lead in AI-Generated Images

Stability AI has just announced Stable Diffusion 3, its newest and most advanced AI for making images. This release seems to be a move to stay ahead of new AI technologies from OpenAI and Google. While we're still waiting for more details, it's clear that Stable Diffusion 3, or SD3, introduces a fresh architecture and aims to work well on many types of computers, although you'll need a strong one.


SD3 is built on an updated method called "diffusion transformer." This idea started in 2022, got an update in 2023, and is now ready to be used more widely. It shares some concepts with Sora, OpenAI's video-making tool. SD3 also uses "flow matching," a new way to make better-quality images without making the system too heavy.
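To give a feel for what flow matching means in practice, here is a minimal training-loss sketch, assuming the straight-path ("rectified flow") formulation described in the research literature; Stability AI has not published SD3's exact objective, and the function and model interface here are our own assumptions.

```python
# Minimal flow-matching loss sketch (rectified-flow style), illustrative
# only. The network learns a velocity field that carries noise to data
# along straight-line paths, which it can later follow to generate images.
import torch

def flow_matching_loss(model, x1: torch.Tensor) -> torch.Tensor:
    """x1: a batch of data samples, shape (batch, dim).
    `model(xt, t)` is assumed to predict a velocity of the same shape."""
    x0 = torch.randn_like(x1)        # pure-noise endpoint of the path
    t = torch.rand(x1.shape[0], 1)   # random time in [0, 1] per sample
    xt = (1 - t) * x0 + t * x1       # point on the straight noise-to-data path
    v_target = x1 - x0               # constant velocity along that path
    v_pred = model(xt, t)            # network's velocity prediction
    return ((v_pred - v_target) ** 2).mean()
```

The appeal is that the regression target is simple and the paths are straight, which is one reason flow matching can improve quality without making the system heavier.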

The new version comes in different sizes, from smaller setups with 800 million parameters to huge ones with 8 billion. This range means SD3 can run on various computers. Unlike AI tools from OpenAI and Google, you don't need to use an online service to access SD3.

Emad Mostaque, who leads Stability AI, said on X that SD3 can understand and create videos too, which is something other big AI companies are focusing on with their online services. These features are still in the planning stage, but it looks like there won't be technical problems adding them later.

It's hard to say which AI model is best because none are fully out yet. However, Stable Diffusion is already popular for making all kinds of images with fewer restrictions on how or what you can create. With the launch of SD3, it's likely to start a new wave of AI-made content, once they figure out how to keep it safe.


Meta’s Oversight Board Expands To Include Threads

Meta’s Oversight Board has just announced plans to expand its purview to include Threads.


The announcement means Threads users will be given the chance to appeal decisions by Facebook’s parent firm on moderating content, giving this independent group the chance to shape policies for the newest app under Meta’s ownership.

It’s a serious expansion for Meta’s Oversight Board, which until now has weighed in only on issues linked to content published on Facebook and Instagram. The change gives Threads users independent accountability relatively early in the app’s life.

As per statements from the Oversight Board, appeals on Threads are going to work very much like they do on Meta’s other two apps, Instagram and Facebook.

After exhausting Meta’s whole internal appeals system, users can find a small glimmer of hope in a review by the Oversight Board.

Under the rules created during the board’s formation, Meta is forced to put the board’s decisions on particular posts into effect. But it is not obligated to stick to the board’s broader policy recommendations.

Taking content moderation of Threads under its belt is clear proof of the growing influence of the platform, which works very much like Twitter despite only being rolled out last summer.

So far, Mark Zuckerberg’s platform has a user base of close to 130 million users, and Zuckerberg has speculated about a more ambitious target of one billion. But only time can tell how true his prediction proves to be.

When we take a closer look into this matter, Threads has rules similar to Instagram’s, and Meta has encountered pushback from a wide number of users over its policies, including content recommendations.

For now, we’re seeing Threads bar searches for certain terms linked to the COVID-19 pandemic and other issues it deems sensitive. As one would expect, this has raised eyebrows.

It similarly shocked some critics when it announced that accounts promoting political content would no longer be recommended to users on the feed. The only way to see such suggestions is if users themselves opt in.

Whether or not the board weighs in on such decisions, only time can tell. It’s going to take some time before users on Threads witness any changes due to recommendations made by the board.

Meta’s Oversight Board takes on only a small fraction of the appeals generated by users. It might take a few weeks or even months for the board to produce a decision, and even more months for the tech giant to amend its existing regulations based on the guidance taken on.

Remember, Meta’s Oversight Board can expedite the whole process in certain cases, so not every appeal has to take a really long time before its effects come into play.

For now, we think the Oversight Board of Meta has its hands full, because an app like Threads is certainly not the simplest to handle, especially in terms of content moderation.

Threads users now have the opportunity to appeal Meta's content moderation decisions through the Oversight Board.