
ChatGPT Health promises to personalise health information. It comes with many risks

Image: Berke Citak / Unsplash

Many of us already use generative artificial intelligence (AI) tools such as ChatGPT for health advice. They give quick, confident and personalised answers, and the experience can feel more private than speaking to a human.

Now, several AI companies have unveiled dedicated “health and wellness” tools. The most prominent is ChatGPT Health, launched by OpenAI earlier this month.

ChatGPT Health promises to generate more personalised answers by allowing users to link medical records and wellness apps, upload diagnostic imaging, and have the tool interpret test results.

But how does it really work? And is it safe?

Most of what we know about this new tool comes from the company that launched it, and questions remain about how ChatGPT Health would work in Australia. Currently, users in Australia can sign up for a waitlist to request access.

Let’s take a look.

AI health advice is booming

Data from 2024 shows 46% of Australians had recently used an AI tool.

Health queries are popular. According to OpenAI, one in four regular ChatGPT users worldwide submit a health-related prompt each week.

Our 2024 study estimated almost one in ten Australians had asked ChatGPT a health query in the previous six months.

This was more common for groups that face challenges finding accessible health information, including:

  • people born in a non-English speaking country
  • those who spoke another language at home
  • people with limited health literacy.

Among those who hadn’t recently used ChatGPT for health, 39% were considering using it soon.

How accurate is the advice?

Independent research consistently shows generative AI tools sometimes give unsafe health advice, even when they have access to a medical record.

There are several high-profile examples of AI tools giving unsafe health advice, including when ChatGPT allegedly encouraged suicidal thoughts.

Recently, Google removed several AI Overviews on health topics – summaries which appear at the top of search results – after a Guardian investigation found the advice about blood test results was dangerous and misleading.

This was just one health prompt the investigation studied. There could be much more advice the AI is getting wrong that we don’t know about yet.

So, what’s new about ChatGPT Health?

The AI tool has several new features aimed at personalising its answers.

According to OpenAI, users will be able to connect their ChatGPT Health account with medical records and smartphone apps such as MyFitnessPal. This would allow the tool to use personal data about diagnoses, blood tests, and monitoring, as well as relevant context from the user’s general ChatGPT conversations.

OpenAI emphasises information doesn’t flow the other way: conversations in ChatGPT Health are kept separate from general ChatGPT, with stronger security and privacy. The company also says ChatGPT Health data won’t be used to train foundation models.

OpenAI says it has worked with more than 260 clinicians in 60 countries (including Australia) to give feedback on and improve the quality of ChatGPT Health outputs.

In theory, all of this means ChatGPT Health could give more personalised answers compared to general ChatGPT, with greater privacy.

But are there still risks?

Yes. OpenAI openly states ChatGPT Health is not designed to replace medical care and is not intended for diagnosis or treatment.

It can still make mistakes. Even if ChatGPT Health has access to your health data, there is very little information about how accurate and safe the tool is, and how well it has summarised the sources it has used.

The tool has not been independently tested. It’s also unclear whether ChatGPT Health would be considered a medical device and regulated as one in Australia.

The tool’s responses may not reflect Australian clinical guidelines or our health systems and services, and may not meet the needs of our priority populations. These include First Nations people, those from culturally and linguistically diverse backgrounds, people with disability and chronic conditions, and older adults.

We don’t know yet if ChatGPT Health will meet data privacy and security standards we typically expect for medical records in Australia.

Currently, many Australians’ medical records are incomplete due to patchy uptake of My Health Record, meaning even if you upload your medical record, the AI may not have the full picture of your medical history.

For now, OpenAI says medical record and some app integrations are only available in the United States.

So, what’s the best way to use ChatGPT for health questions?

In our research, we have worked with community members to create short educational materials that help people think about the risks that come with relying on AI for health advice, and to consider other options.

Higher risk

Health questions that would usually require clinical expertise to answer carry more risk of serious consequences. This could include:

Symptom Checker, operated by healthdirect, is another publicly funded, evidence-based tool that will help you understand your next steps and connect you with local services.

For now, we need clear, reliable, independent, and publicly available information about how well the current tools work and the limits of what they can do. This information must be kept up-to-date as the tools evolve.

Julie Ayre, Postdoctoral Research Fellow, Sydney Health Literacy Lab, University of Sydney; Adam Dunn, Professor of Biomedical Informatics, University of Sydney; and Kirsten McCaffery, NHMRC Principal Research Fellow, Sydney School of Health, University of Sydney


Apple's New AI Offers Image Editing With Natural Language Prompts

ChatGPT made large language models (LLMs) one of the most prominent technologies around, but we are already seeing the rise of MLLMs, or multimodal large language models, which can process images as well as text. Apple has just released its own MLLM, dubbed MGIE, and it might represent the next step forward in the AI race.


The main thing that sets MGIE apart is its ability to edit images based on natural language instructions. Prompts don’t have to be phrased for a machine; they can be written in normal everyday language, similar to the instructions one would give to a human image editor.

MGIE uses its MLLM to translate plain language into more technical editing instructions. For example, if a user asks to make the sky in a particular picture a deeper shade of blue, MGIE will translate this into an instruction to increase the saturation of that region by 20% or so.
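To make that translation step concrete, here is a minimal sketch in Python using the Pillow library. The prompt-to-parameter mapping and the assumption that the sky occupies the top third of the frame are hypothetical stand-ins for illustration, not Apple’s actual model:

```python
from PIL import Image, ImageEnhance

# Hypothetical output of the MLLM's translation step: a plain-language
# prompt mapped to a target region and a saturation factor (1.2 = +20%).
INSTRUCTION_MAP = {
    "make the sky a deeper shade of blue": ("top_third", 1.2),
}

def apply_edit(path: str, prompt: str) -> Image.Image:
    """Carry out the technical instruction derived from the prompt."""
    region, factor = INSTRUCTION_MAP[prompt]
    img = Image.open(path).convert("RGB")
    width, height = img.size
    if region == "top_third":  # crude stand-in for real region detection
        box = (0, 0, width, height // 3)
    else:
        raise ValueError(f"unknown region: {region}")
    patch = ImageEnhance.Color(img.crop(box)).enhance(factor)  # boost saturation
    img.paste(patch, (box[0], box[1]))
    return img

# apply_edit("landscape.jpg", "make the sky a deeper shade of blue").save("out.jpg")
```

In MGIE itself, the region and edit parameters come from the model’s learned representation rather than a lookup table, which is what lets it handle arbitrary prompts.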

MGIE also leverages a distinct end-to-end training scheme to create a latent representation of the result the user is looking for, referred to as visual imagination, and from this it derives instructions to edit each pixel accordingly. Such precision can be enormously useful, potentially allowing edits to be made far faster than manual retouching.

MGIE can optimize, edit and manipulate photos in whatever way a user requires. It is currently available as an open-source model on GitHub, the result of a collaboration between Apple and the University of California, Santa Barbara.

Photo: Digital Information World - AIgen

AI Images And Deepfakes Displaying Child Abuse Could Be Criminalized, EU Confirms

The EU is gearing up to criminalize serious offenses such as depicting child abuse through AI imagery and deepfakes.


The bloc’s regulators have called the move long overdue, as lawmakers race to keep existing rules in step with the rise of new technology.

The proposal would likewise make livestreaming child abuse a criminal offense, and would criminalize the possession and exchange of so-called pedophile manuals.

This is part of a wider package of EU measures intended to strengthen such laws. The online risks are serious: the material is becoming harder to contain, and victims are finding it harder to report such crimes.

The rules being updated date back to 2011, and this proposal represents a major overhaul. In 2022, the Commission separately proposed requiring platforms to deploy technology to detect child abuse material.

That CSAM scanning plan has proven highly controversial, even as many lawmakers criticize tech giants for failing to take adequate measures against such material.

The approach has drawn repeated criticism, with experts and lawmakers arguing the focus is in the wrong place, and pressure is coming from all directions.
Scanning private messages remains particularly contentious, while deepfakes are at an all-time high. Child abuse material is a problem the tech world has struggled with for years, and the arrival of AI has made it worse.

The plan also involves identifying those at risk and determining which content is real and which is AI-generated.

The Commission stressed how quickly the technology is developing, and with it the need for greater scrutiny.

Photo: Digital Information World - AIgen

YouGov: 82% Oppose Brain Chip Implants, 10% Undecided, 2% Willing for Testing, 5% Open Within Year

Elon Musk has announced that his company Neuralink has begun testing chips that connect the human brain with technology, and has already implanted the chip in its first human test subject. The news drew mixed reactions from the public, so YouGov conducted a poll in February 2024 to gauge opinion. Of the 1,000 respondents, only 8% said they would get a chip in their brain if it were sold publicly.


82% of respondents gave a firm no when asked whether they would want a chip implanted in their brain, while 10% were undecided. 2% said they would volunteer as a human test subject to get a chip implanted within the next year, and 5% overall said they would like to get the chip within the next year.

Men (13%) were more willing than women (4%) to get a chip once commercially available. Democrats and independents (10% each) were also more comfortable with the technology than Republicans.

The idea of chips that alter brain function or grant extra mental capabilities is a staple of science fiction novels and movies, and respondents who embraced the idea probably have those stories in mind. Neuralink’s pitch resonates with readers of Dune and Ender’s Game: 19% of respondents who were ready to buy the chip had read Dune, and 11% of those who had read the novel were ready to get the chip this year.

However, there is also Flowers for Algernon, a short story about an experiment to enhance the human brain that ends disastrously. 13% of respondents who had read the story were still excited about the chip, despite knowing how it ends.

Photo: Digital Information World - AIgen

How Transparent Are Marketers About Using AI?

The rise of AI means countless marketers now rely on these tools to create content for their clients. This begs the question: how transparent are marketers about the practice? Filestage conducted a survey in December 2023 that sheds some light on the current level of transparency in the industry.


It bears mentioning that the proportion of marketers who tell stakeholders about their use of AI depends on who those stakeholders are. For example, 77% of marketers working for start-ups told their superiors about using AI, but just 46% of marketers employed at production companies said the same. The proportions for agencies, established brands and freelancers were 61%, 59% and 55% respectively.

Start-up marketers also used AI more frequently than their peers: 38% said they use AI on a daily basis, compared with 24% of freelancers, 19% of agency and production company marketers, and just 11% of marketers working for established brands.

As for AI’s impact on marketing work, 83% of respondents said it had a positive effect on their productivity, 62% said it boosted profitability, and 59% said it improved their creativity.

That said, many respondents felt AI brought downsides as well: 32% said it had a negative impact on their self-esteem, and 25% said it reduced the recognition they received for their work. It will be interesting to see how the trend develops from here.




News Startup Rolls Out AI Editorial Tool That Uses Microsoft Bing And ChatGPT To Generate Multi-Source News Feed

Popular news site Semafor has just rolled out a powerful AI-based editorial tool dubbed Signals.


The tool uses AI-assisted research, drawing on Microsoft’s Bing and ChatGPT, to give readers a news feed that pulls in sources from around the globe, the startup said on Monday.

Microsoft is sponsoring Signals and providing an undisclosed amount of funding, according to The Financial Times. The company built the customized AI bot so that Semafor’s journalists could draw on both the OpenAI platform and Bing.

The site’s editors say the chatbot can generate accurate answers, produce synopses, and cite its sources when needed.
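As a rough sketch of how such a research assistant could be wired together, the Python below pairs a web search with an LLM summarization step. It is a generic illustration, not Semafor’s actual pipeline: the Bing Web Search v7 endpoint and the OpenAI chat completions API are real, but the model choice, prompts and environment variable names are assumptions:

```python
import os

import requests
from openai import OpenAI

def gather_sources(query: str, count: int = 5) -> list[dict]:
    """Fetch candidate articles from the Bing Web Search API (v7)."""
    resp = requests.get(
        "https://api.bing.microsoft.com/v7.0/search",
        headers={"Ocp-Apim-Subscription-Key": os.environ["BING_SEARCH_KEY"]},
        params={"q": query, "count": count},
        timeout=10,
    )
    resp.raise_for_status()
    return [
        {"title": p["name"], "url": p["url"], "snippet": p["snippet"]}
        for p in resp.json()["webPages"]["value"]
    ]

def summarize_with_citations(query: str, sources: list[dict]) -> str:
    """Ask an OpenAI chat model for a synopsis that cites each source it uses."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    source_list = "\n".join(
        f"- {s['title']} ({s['url']}): {s['snippet']}" for s in sources
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": "Summarize the story from the sources given, "
                           "citing the URL of each source you rely on.",
            },
            {"role": "user", "content": f"Story: {query}\n\nSources:\n{source_list}"},
        ],
    )
    return response.choices[0].message.content

# Example: print(summarize_with_citations("EU AI Act", gather_sources("EU AI Act")))
```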

In a post on X, the editor-in-chief spoke about the effort as part of repairing a digital media landscape that has been broken for years.

They argued that AI tools are the future, and not just for generating articles; instead, the technology will assist the research behind the branded stories found in Signals.

Semafor says the AI is used to reduce bias and mistrust by distilling coverage from a range of global sources.

Stories produced with the AI technology are labeled “Semafor Signals” and carry a note about Microsoft’s support.

Each Signals story includes a section that combines context from a wide range of sources.

Signals appear on Semafor’s homepage and will also be included in its newsletter, according to the company’s statement.

Semafor’s decision to embrace AI tools stands in contrast to other media outlets such as The New York Times, which filed a lawsuit against Microsoft and ChatGPT maker OpenAI in December over copyright infringement.

Semafor’s use of AI also differs from publishers that generate articles with the technology outright. G/O Media, which owns sites like Gizmodo and Kotaku, published AI-produced content that its own workers criticized last year.

Photo: DIW - AIgen

AI Lobbying Reaches All-Time High With 185% Increase From Last Year, Alarming Study Proves

A new and alarming analysis shows AI lobbying has reached record-breaking levels.


AI lobbying surged in 2023, rising a whopping 185% from 2022. Where around 158 different firms participated in the past, nearly 450 companies are now part of the activity, and that has experts talking.
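For what it’s worth, the headline figure is consistent with those counts, as a quick back-of-the-envelope check shows:

```python
# Sanity check on the headline figure, using the firm counts cited above.
firms_2022, firms_2023 = 158, 450
increase = (firms_2023 - firms_2022) / firms_2022
print(f"{increase:.0%}")  # -> 185%
```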

The figures come from federal lobbying disclosures analyzed by OpenSecrets in collaboration with CNBC.

The rise comes amid growing calls to regulate AI, with President Biden’s administration beginning to codify such rules. Firms that began lobbying on AI in 2023, including TikTok and Tesla, wanted a say in how regulation would affect their businesses.

Others on the list included Spotify, Shopify, Pinterest, Samsung, Nvidia, Dropbox, DoorDash, Palantir, Instacart, OpenAI and more, making for a comprehensive roster.

Hundreds of firms lobbied on AI last year, and it was not just leading AI startups and tech giants: insurance firms, finance companies, pharmaceutical companies, academia and even telecoms took part.

Until 2017, the number of firms reporting AI lobbying stayed in the single digits. The report shows the activity has grown steadily since, and it is expected to increase further.

More than 330 firms lobbied on AI last year that had not done so in 2022. These new entrants to the lobbying system spanned a range of industries, including chipmakers AMD and TSMC.

Other names on the list included the likes of Disney and Appen. Many of these companies reported lobbying on AI alongside a range of other matters involving the government.

In total, companies that lobbied on AI reported spending close to $957 million in 2023 alone across all of these issues, according to OpenSecrets.

Then in October, President Biden issued an executive order on AI, a new kind of action for the American government. It required safety assessments, guidance on civil rights, and research into AI’s effect on today’s labor market.

The goal was to create a system of guidelines for examining particular AI models, including test environments for them, and to help develop consensus-based standards for AI.

After the executive order was rolled out, a frenzy followed as industry groups, labor unions and others dug into the massive 100-page document, which set out particular priorities and deadlines.

In early December, NIST opened a public comment period in which many firms weighed in on how they would like to shape the rules. Responses addressed how particular AI standards should be created, how AI systems can best be tested, how to manage the risks of generative AI, and how to limit the risks of fake content made with AI.

Image: Digital Information World - AIgen