
ChatGPT Health promises to personalise health information. It comes with many risks

Image: Berke Citak / Unsplash

Many of us already use generative artificial intelligence (AI) tools such as ChatGPT for health advice. They give quick, confident and personalised answers, and the experience can feel more private than speaking to a human.

Now, several AI companies have unveiled dedicated “health and wellness” tools. The most prominent is ChatGPT Health, launched by OpenAI earlier this month.

ChatGPT Health promises to generate more personalised answers, by allowing users to link medical records and wellness apps, upload diagnostic imaging and interpret test results.

But how does it really work? And is it safe?

Most of what we know about this new tool comes from the company that launched it, and questions remain about how ChatGPT Health would work in Australia. Currently, users in Australia can sign up for a waitlist to request access.

Let’s take a look.

AI health advice is booming

Data from 2024 shows 46% of Australians had recently used an AI tool.

Health queries are popular. According to OpenAI, one in four regular ChatGPT users worldwide submit a health-related prompt each week.

Our 2024 study estimated almost one in ten Australians had asked ChatGPT a health query in the previous six months.

This was more common for groups that face challenges finding accessible health information, including:

  • people born in a non-English speaking country
  • those who spoke another language at home
  • people with limited health literacy.

Among those who hadn’t recently used ChatGPT for health, 39% were considering using it soon.

How accurate is the advice?

Independent research consistently shows generative AI tools do sometimes give unsafe health advice, even when they have access to a medical record.

There are several high-profile examples of AI tools giving unsafe health advice, including when ChatGPT allegedly encouraged suicidal thoughts.

Recently, Google removed several AI Overviews on health topics – summaries which appear at the top of search results – after a Guardian investigation found the advice about blood test results was dangerous and misleading.

This was just one health prompt investigators studied; there could be much more advice the AI is getting wrong that we don’t know about yet.

So, what’s new about ChatGPT Health?

The AI tool has several new features aimed to personalise its answers.

According to OpenAI, users will be able to connect their ChatGPT Health account with medical records and smartphone apps such as MyFitnessPal. This would allow the tool to use personal data about diagnoses, blood tests, and monitoring, as well as relevant context from the user’s general ChatGPT conversations.

OpenAI emphasises information doesn’t flow the other way: conversations in ChatGPT Health are kept separate from general ChatGPT, with stronger security and privacy. The company also says ChatGPT Health data won’t be used to train foundation models.

OpenAI says it has worked with more than 260 clinicians in 60 countries (including Australia), to give feedback on and improve the quality of ChatGPT Health outputs.

In theory, all of this means ChatGPT Health could give more personalised answers compared to general ChatGPT, with greater privacy.

But are there still risks?

Yes. OpenAI openly states ChatGPT Health is not designed to replace medical care and is not intended for diagnosis or treatment.

It can still make mistakes. Even if ChatGPT Health has access to your health data, there is very little information about how accurate and safe the tool is, and how well it has summarised the sources it has used.

The tool has not been independently tested. It’s also unclear whether ChatGPT Health would be considered a medical device and regulated as one in Australia.

The tool’s responses may not reflect Australian clinical guidelines, our health systems and services, and may not meet the needs of our priority populations. These include First Nations people, those from culturally and linguistically diverse backgrounds, people with disability and chronic conditions, and older adults.

We don’t know yet if ChatGPT Health will meet data privacy and security standards we typically expect for medical records in Australia.

Currently, many Australians’ medical records are incomplete due to patchy uptake of MyHealthRecord, meaning even if you upload your medical record, the AI may not have the full picture of your medical history.

For now, OpenAI says medical record and some app integrations are only available in the United States.

So, what’s the best way to use ChatGPT for health questions?

In our research, we have worked with community members to create short educational materials that help people think about the risks that come with relying on AI for health advice, and to consider other options.

Higher risk

Health questions that would usually require clinical expertise to answer carry more risk of serious consequences.

Symptom Checker, operated by healthdirect, is a publicly funded, evidence-based tool that can help you understand your next steps and connect you with local services.

For now, we need clear, reliable, independent, and publicly available information about how well the current tools work and the limits of what they can do. This information must be kept up-to-date as the tools evolve.

Julie Ayre, Postdoctoral Research Fellow, Sydney Health Literacy Lab, University of Sydney; Adam Dunn, Professor of Biomedical Informatics, University of Sydney; and Kirsten McCaffery, NHMRC Principal Research Fellow, Sydney School of Health, University of Sydney


Your Smartphone Might Be Giving You ADHD: Here’s What You Need to Know

 The notion that children simply grow out of ADHD symptoms as they get older has been thoroughly debunked by science. Even so, there is a chance that smartphones might actually be giving adults ADHD without them realizing it. The disorder usually originates in children under the age of 12, but smartphones appear to be causing adults to develop related symptoms as well.


It bears mentioning that the symptoms can look rather different in adults than they do in children, with anger management issues, excessive restlessness, low self-esteem, trouble with relationships, poor time management skills and many others factoring into the mix. Notably, the proportion of adults with ADHD was around 6.3% in 2020, a significant uptick from the 4.4% diagnosed in 2003.

According to research published in the Journal of the American Medical Association, the use of digital media can increase the likelihood of an ADHD diagnosis by as much as 10%. What’s more, adults are often required to multitask, which can be harmful because it pulls their minds in too many different directions at once.

There is now considerable clinical evidence that using too much technology can lead to ADHD symptoms down the line. Of course, these symptoms might instead be caused by hormonal changes or a wide range of other unrelated circumstances, but the connection between ADHD and digital media consumption, including smartphone usage, can’t be ignored. It will be interesting to see where things go from here, since these findings point to something extremely pertinent to modern life.

Photo: Digital Information World - AIgen

Use of Screens for More than 7 Hours a Day Harms the Physical as Well as Mental Health of Individuals

 A report presented by the American Optometric Association (AOA) shows that more than 7 hours of daily screen time is very harmful to our health, costing the US economy about $73 billion per year. Avoiding digital devices and screens is very tough because most of our lives revolve around them nowadays, but heavy use has many harmful effects on our health. The report found that using screens for more than 7 hours a day can result in myopia, also known as nearsightedness, as well as digital eye strain (DES), also called computer vision syndrome (CVS). The symptoms of these conditions include dry eyes, blurry vision, headaches, and back and neck pain.


Screen Overuse: AOA Report Reveals $73 Billion Annual Cost and Health Risks
Photo: Digital Information World - AIgen

If people don't take DES seriously, it can also reduce their productivity, lead to other eye problems, and harm their sleep schedule and mental health. Other effects include presenteeism (being at work but unable to work well) or frequent absences during work hours. Sufferers also have to visit their health providers frequently, which means spending a lot of money. Overall, their quality of life sees a huge decline.

The solution to all these problems isn't only reducing your screen time but also being mindful of how you use your devices, especially for people whose jobs depend on spending time in front of screens. Around 70% of people in office jobs need to spend their working hours using screens, compared with 42% of people in other professions. The best way to keep yourself safe from DES is to watch for its symptoms and get regular eye checkups. Also, wear eyewear recommended by an eye specialist while working in front of a screen. You can follow the 20-20-20 rule as well: for every 20 minutes on a screen, look at something 20 feet away for 20 seconds. Don't forget to fix your posture by using a chair and table suitable for your height. Taking breaks between stretches of screen work is also very beneficial for your physical as well as mental health.
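For readers who live in a terminal all day, the 20-20-20 rule above is simple enough to sketch as a small script. This is only an illustrative example of the schedule described in the article; the function names and the idea of printing reminders are our own, not anything from the AOA report, and a real tool would sleep between cycles rather than just build the messages.

```python
# Sketch of a 20-20-20 reminder: every 20 minutes of screen time,
# take a 20-second break looking at something about 20 feet away.
WORK_MINUTES = 20
BREAK_SECONDS = 20
BREAK_DISTANCE_FEET = 20

def run_reminder(cycles: int) -> list[str]:
    """Build the reminder messages for a given number of work cycles.

    A real tool would call time.sleep(WORK_MINUTES * 60) before each
    reminder; here we only construct the prompt text so the schedule
    is easy to see.
    """
    messages = []
    for cycle in range(1, cycles + 1):
        messages.append(
            f"Cycle {cycle}: look at something {BREAK_DISTANCE_FEET} feet "
            f"away for {BREAK_SECONDS} seconds."
        )
    return messages

if __name__ == "__main__":
    # Print reminders for a three-cycle (one hour) stretch of work.
    for msg in run_reminder(3):
        print(msg)
```

Pairing a reminder like this with the posture and eyewear advice above covers the low-effort end of the report's recommendations; regular checkups still require an actual optometrist.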

Meta Unveils New ‘Nighttime Nudges’ For Instagram So Teens Log Off During Late Hours

 It’s no surprise that tech giant Meta has been scrutinized in the past and continues to face major criticism over the safety its apps offer to minors.


Plenty of research and bombshell documents have called out Facebook’s parent firm and how it needs to be doing more than rolling out a limited set of tools to combat the issue.

Keeping this in mind, Meta has just unveiled a new Instagram feature that discourages teens from using the app excessively late at night.
The rollout comprises a feature called nudges, prompts encouraging teens to log off once they hit late-night hours.

The alert will be displayed whenever a minor uses the app for more than ten minutes in a particular area, such as Instagram Reels or DMs, late at night.

The company said in a newsroom post that the goal was to build the tool because nothing is more important than sleep, especially for the younger generation.


Image: Meta


The alerts appear during night hours and are the newest offering in Meta’s parental control suite. We first heard about this suite last June, when Meta was rolling out features to help parents manage kids’ screen time and activity on its apps. The firm has also been slowly rolling out a host of other tools over the past couple of months.

The tech giant said last week that the goal was to keep better tabs on what is shown to minors on both Facebook and Instagram. Such settings reduce the amount of sensitive content children are exposed to on these platforms.


The leading tech giant has faced a lot of criticism in the recent past over how its offerings affect the minds of youngsters, and it has been hit with lawsuits from a host of state attorneys general over the overall lack of safeguards in place for young users.

Meanwhile, a bipartisan group of 33 state attorneys general sued the firm, alleging it targeted minors by marketing addictive offerings. A separate lawsuit in December saw New Mexico spring into action after its regulators argued the apps had become breeding grounds for abuse of young minds, which they called totally unacceptable.

In the past, Meta has worked long and hard to show how it has tried to curb such issues, launching nearly 30 tools and a host of different resources to support both children and parents. The goal, it says, was to make sure only safe experiences were taking place on these apps.