Showing posts with label artificial-intelligence. Show all posts

ChatGPT Health promises to personalise health information. It comes with many risks

Image: Berke Citak / Unsplash

Many of us already use generative artificial intelligence (AI) tools such as ChatGPT for health advice. They give quick, confident and personalised answers, and the experience can feel more private than speaking to a human.

Now, several AI companies have unveiled dedicated “health and wellness” tools. The most prominent is ChatGPT Health, launched by OpenAI earlier this month.

ChatGPT Health promises to generate more personalised answers, by allowing users to link medical records and wellness apps, upload diagnostic imaging and interpret test results.

But how does it really work? And is it safe?

Most of what we know about this new tool comes from the company that launched it, and questions remain about how ChatGPT Health would work in Australia. Currently, users in Australia can sign up for a waitlist to request access.

Let’s take a look.

AI health advice is booming

Data from 2024 shows 46% of Australians had recently used an AI tool.

Health queries are popular. According to OpenAI, one in four regular ChatGPT users worldwide submit a health-related prompt each week.

Our 2024 study estimated almost one in ten Australians had asked ChatGPT a health query in the previous six months.

This was more common for groups that face challenges finding accessible health information, including:

  • people born in a non-English speaking country
  • those who spoke another language at home
  • people with limited health literacy.

Among those who hadn’t recently used ChatGPT for health, 39% were considering using it soon.

How accurate is the advice?

Independent research consistently shows generative AI tools do sometimes give unsafe health advice, even when they have access to a medical record.

There are several high-profile examples of AI tools giving unsafe health advice, including when ChatGPT allegedly encouraged suicidal thoughts.

Recently, Google removed several AI Overviews on health topics – summaries which appear at the top of search results – after a Guardian investigation found the advice about blood tests results was dangerous and misleading.

That was just one of the health prompts investigated. There may be much more unsafe advice that we don't yet know about.

So, what’s new about ChatGPT Health?

The AI tool has several new features aimed at personalising its answers.

According to OpenAI, users will be able to connect their ChatGPT Health account with medical records and smartphone apps such as MyFitnessPal. This would allow the tool to use personal data about diagnoses, blood tests, and monitoring, as well as relevant context from the user’s general ChatGPT conversations.

OpenAI emphasises information doesn’t flow the other way: conversations in ChatGPT Health are kept separate from general ChatGPT, with stronger security and privacy. The company also says ChatGPT Health data won’t be used to train foundation models.

OpenAI says it has worked with more than 260 clinicians in 60 countries (including Australia), to give feedback on and improve the quality of ChatGPT Health outputs.

In theory, all of this means ChatGPT Health could give more personalised answers compared to general ChatGPT, with greater privacy.

But are there still risks?

Yes. OpenAI openly states ChatGPT Health is not designed to replace medical care and is not intended for diagnosis or treatment.

It can still make mistakes. Even if ChatGPT Health has access to your health data, there is very little information about how accurate and safe the tool is, and how well it has summarised the sources it has used.

The tool has not been independently tested. It’s also unclear whether ChatGPT Health would be considered a medical device and regulated as one in Australia.

The tool’s responses may not reflect Australian clinical guidelines, our health systems and services, and may not meet the needs of our priority populations. These include First Nations people, those from culturally and linguistically diverse backgrounds, people with disability and chronic conditions, and older adults.

We don’t know yet if ChatGPT Health will meet data privacy and security standards we typically expect for medical records in Australia.

Currently, many Australians’ medical records are incomplete due to patchy uptake of MyHealthRecord, meaning even if you upload your medical record, the AI may not have the full picture of your medical history.

For now, OpenAI says medical record and some app integrations are only available in the United States.

So, what’s the best way to use ChatGPT for health questions?

In our research, we have worked with community members to create short educational materials that help people think about the risks that come with relying on AI for health advice, and to consider other options.

Higher risk

Health questions that would usually require clinical expertise to answer carry more risk of serious consequences.

Symptom Checker, operated by healthdirect, is a publicly funded, evidence-based tool that will help you understand your next steps and connect you with local services.

For now, we need clear, reliable, independent, and publicly available information about how well the current tools work and the limits of what they can do. This information must be kept up-to-date as the tools evolve.

Julie Ayre, Post Doctoral Research Fellow, Sydney Health Literacy Lab, University of Sydney; Adam Dunn, Professor of Biomedical Informatics, University of Sydney; and Kirsten McCaffery, NHMRC Principal Research Fellow, Sydney School of Health, University of Sydney


Shareholders Push for Apple to Open Up About AI Use

 At Apple's yearly meeting, big shareholders have a chance to suggest changes. This year, the American Federation of Labor and Congress of Industrial Organizations (AFL-CIO) wants Apple to share how it uses AI and its rules for using this technology safely.


Norges Bank Investment Management and Legal & General, two of Apple's biggest shareholders, are backing this idea. Norges Bank says Apple should think about how its work and products affect society. Legal & General talked to Apple about being more open with its AI plans but didn't get the details they wanted. They think Apple should be clear about how it uses AI and how it manages any risks.

A big advisory group, Institutional Shareholder Services, is telling Apple's investors to support this AI proposal. They believe Apple's current rules don't fully cover the risks of using AI. This makes it hard for shareholders to understand the dangers.


Apple, however, doesn't want this proposal to pass. The company says the report being asked for is too broad and could force it to share secrets that would hurt its competitive edge. In the U.S., shareholder proposals are not binding even when they pass, but support from more than 30% of investors usually makes a company take the idea seriously.
Everyone is watching to see if Apple will introduce new AI features at its WWDC event this year. This push for more openness about AI use at Apple is part of a bigger conversation on how big tech companies handle new technologies and their impact on society.

Image: Digital Information World - AIgen

Stability AI Launches Stable Diffusion 3 to Lead in AI-Generated Images

 Stability AI has just announced Stable Diffusion 3, its newest and most advanced AI for making images. This release seems to be a move to stay ahead of new AI technologies from OpenAI and Google. While we're still waiting for more details, it's clear that Stable Diffusion 3, or SD3, introduces a fresh setup and aims to work well on many types of computers, although you'll need a strong one.


SD3 is built on an updated method called "diffusion transformer." This idea started in 2022, got an update in 2023, and is now ready to be used more widely. It shares some concepts with Sora, OpenAI's video-making tool. SD3 also uses "flow matching," a new way to make better-quality images without making the system too heavy.
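The "flow matching" idea can be sketched in a few lines: the model learns to predict the velocity that carries a noise sample to a data sample along a straight path. Below is a toy NumPy illustration of that standard training objective, not Stability AI's actual code; the function and variable names are our own:

```python
import numpy as np

rng = np.random.default_rng(0)

def flow_matching_loss(model, x0):
    """Evaluate one flow-matching training objective.

    x0: batch of data samples. The model predicts a velocity field
    v(x_t, t); along the straight path from noise x1 to data x0,
    the target velocity is (x0 - x1), constant in t.
    """
    x1 = rng.standard_normal(x0.shape)      # noise endpoint of the path
    t = rng.uniform(size=(x0.shape[0], 1))  # random time in [0, 1]
    xt = (1 - t) * x1 + t * x0              # point on the straight path
    target = x0 - x1                        # velocity the model should match
    pred = model(xt, t)
    return np.mean((pred - target) ** 2)

# A placeholder "model" that predicts zero velocity: the loss is then
# just the mean squared magnitude of the true velocities.
toy_model = lambda xt, t: np.zeros_like(xt)
x0 = rng.standard_normal((4, 2))
loss = flow_matching_loss(toy_model, x0)
```

In SD3 the model playing this role is a large diffusion transformer; the appeal of flow matching is that the training target stays this simple while image quality improves.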

The new version comes in different sizes, from smaller setups with 800 million parameters to huge ones with 8 billion. This range means SD3 can run on various computers. Unlike AI tools from OpenAI and Google, you don't need an online service to access SD3.

Emad Mostaque, who leads Stability AI, said on X that SD3 can understand and create videos too, which is something other big AI companies are focusing on with their online services. These features are still in the planning stage, but it looks like there won't be technical problems adding them later.

It's hard to say which AI model is best because none are fully out yet. However, Stable Diffusion is already popular for making all kinds of images with fewer restrictions on how or what you can create. With the launch of SD3, it's likely to start a new wave of AI-made content, once they figure out how to keep it safe.


Google Chrome’s Latest ‘Help Me Write’ Assistant Uses AI To Make Better Text Suggestions

 Google just rolled out a useful AI writing tool called ‘Help Me Write’ that can be found on the Chrome Browser.


The tool makes text suggestions based on the context of the website the user is viewing.

The new AI writing assistant is built into Chrome and offers suggestions that help users produce text such as reviews and inquiries. For now, the feature is available only in English and only to users in the US, though it may reach more people over time.

The tool first made headlines after being unveiled by the Android maker in January of this year, and following this week's launch, many users are eager to try it.

The tool helps with producing all kinds of text, including online reviews, inquiries and classified advertisements.

Google promises that the new tool will make writing much simpler for users. Help Me Write is powered by the firm's Gemini model and generates text based on the context of the website being browsed and the text field the user is writing in.

For instance, when a user opts to sell items online, Help Me Write can take a short product description and expand it into something more polished and detailed.

In a recent blog post, the company explained how the tool comprehends the context of the page a user is on in order to make relevant content suggestions.

For instance, when you write a review for track shoes, Chrome pulls key points from the product's page to support your recommendation, making the review more valuable for shoppers considering a purchase.

Google rolled out plenty of examples about how the Help Me Write tool works and what users can expect after utilizing it.

The Help Me Write feature generates content from contextual cues: it bases its suggestions on the content of the page and the specific text field the user is interacting with.

For writing tasks such as online reviews and classified ads, it enhances composition by expanding short descriptions into something more detailed.

It also aims to keep the output informative, drawing on product details from the webpage so the generated text is useful to potential readers.

Interestingly, marketers can leverage the tool as well, to produce things like advertisements or online listings.

'Help Me Write' AI tool on Chrome assists in writing tasks, from reviews to inquiries, with contextual cues.

Study Shows Where to Go to Find an AI Job

 It’s a brave new world: we’ve got self-driving cars, airport robots helping you find your gate, and refrigerators that can tell you when you need milk. Regardless of their purpose, smart tech has more Americans attuned to the artificial intelligence (AI) landscape these days.


For many, that means excitement about the growing cache of AI jobs out there. It’s true that opportunities in AI are taking off, and a new study by moveBuddha shows hotspots where this niche job market is booming.

With California’s long-time dominance in tech and startups, it only makes sense that almost 25% of the country’s AI jobs are in the Golden State. But there’s plenty of gold to go around. Up-and-comers are competing for dominance, and two states even have more AI jobs per capita than California: Virginia and Maryland each have 6 jobs available for every 10,000 residents.
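The per-capita figure behind that comparison is simple arithmetic: openings divided by population, scaled to 10,000 residents. A quick sketch (the population and job counts below are rough illustrative values, not figures from the moveBuddha study):

```python
def jobs_per_10k(openings: int, population: int) -> float:
    """AI job openings per 10,000 residents."""
    return openings / population * 10_000

# Illustrative: a state with fewer total openings can still rank
# higher per capita than a much more populous one.
virginia = jobs_per_10k(5_200, 8_700_000)      # roughly 6 per 10k
california = jobs_per_10k(18_000, 39_000_000)  # roughly 4.6 per 10k
assert virginia > california
```

This is why per-capita rankings can put smaller states like Virginia and Maryland ahead of California despite California's far larger raw job count.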
The study shows where AI jobs are taking off — and where job seekers might want to take off and relocate. This post shares more insight on what makes a state shine, and some ways digital tech watchers can predict future superstar AI locations.

Where are AI Jobs Growing?

It turns out that it's pretty tough to knock a long-time tech king off its throne. With the most openings overall, a high number of jobs on a per-resident basis (no small feat for a state as populous as California), and high salaries for AI engineers, California is home to more AI jobs than any other state. By a long shot.

But other states are seeing their share of the AI pie, too. Here’s the top ten list:
  1. California
  2. Virginia
  3. Washington
  4. Maryland
  5. Texas
  6. Colorado
  7. Massachusetts
  8. Pennsylvania
  9. Missouri
  10. North Carolina

Best and Worst States Infographic

Why are AI Jobs Growing?

California’s longstanding position as #1 can be attributed to a number of factors:
  • Leading universities: With a research and talent pipeline, AI jobs bubble out of educational institutions.
  • Existing tech and AI companies: While new states are luring companies all the time, California has some high-profile heavy-hitters. From Meta to Google and Apple, AI jobs with these companies put the state on top. Note that #3 Washington is also home to existing large tech enterprises that are behind the high number of job openings, like Amazon and Microsoft.
  • A robust venture capital ecosystem: AI ventures are fairly new, and new companies need a nurturing system in which to develop ideas and grow. It all takes funding, so newcomers often follow that cash to places like Silicon Valley.
  • Network effects: With existing traction, AI players go where the action is. That creates more opportunities, ideas, new companies, and eventually, even more jobs.
What's the lesson here? Some common elements help small ecosystems gain traction and grow as AI hubs. It starts with a research and capital commitment to AI. New cities that can anchor their digital tech industries with these two key elements can see their AI sectors grow.

Table: Ranking the 50 U.S. states for AI jobs in 2024

Virginia and Maryland's Growing AI Hubs

It can be difficult to replicate California's magic elsewhere, but some strong AI hubs are capitalizing on these core factors and their own strengths to make it happen.

For example, #2 Virginia is a powerhouse near national government services and contractors. It's strong in industries like defense, where AI is becoming indispensable. Northern Virginia, in particular, plays host to a network of defense, cybersecurity, and intelligence firms. And those companies could easily kick-start the network effects that catapulted California to the top of the tech industry.

Further, Northern Virginia has also been a hub for data centers. Outside the pricey reaches of the D.C. beltway, the expansive Virginia suburbs provide space to support the nation's computing needs. That's led to hardware and software experts finding jobs and support in the area, and specialists in machine learning are now finding their services in high demand there.

There are also strong universities, including those in Washington, D.C., and nearby Maryland (which also makes the top ten list).

Other Top Ten Keys for Unlocking AI Jobs

Why are other states on the top ten list? Here are some big components of their success:
  • Texas plays host to large tech giants: Dell, IBM, and Texas Instruments have long had large presences in the state, and Oracle, Hewlett Packard, and Tesla have moved their headquarters there from California.
  • CU Boulder (Colorado) scored a huge grant for an AI learning center that led to collaborations among students, industry, and researchers, growing the area's research prowess along with its network effects and talent pool. Colorado is also 8th in the country in venture capital investment.
  • Massachusetts' universities feed its AI pipeline: Harvard, MIT, and a host of other East Coast schools supply Boston, New England's biggest city, with a steady stream of talent. The state's venture capital network ranks second, behind California, helping keep that talent and those companies in the state, learning and growing.
  • Pennsylvania boasts Carnegie Mellon University, with a top computer science department and a long history of AI research. There's also the University of Pennsylvania, an Ivy League institution.
  • Missouri's AI job listings come from a diverse group of companies across industries. That economic foundation has helped spawn tech incubators in Kansas City and St. Louis to nurture more talent, and it seems to be paying off.
  • North Carolina has a growing population and a strong focus on cybersecurity in banking hubs like Charlotte, while the Research Triangle's reputable research universities churn out AI patents and feed a startup ecosystem that nurtures the companies emerging from them.

While none of these emergent competitors comes close to the level of support California companies have enjoyed, they're on their way. And as California has shown, once there are a few players in an area, a hub attracts new talent, companies, and capital more easily. In the case of AI, tech hubs all over the country are finding they can fuel growth outside the Bay Area.

That diversity is great for jobs, and for job seekers who aren't into fog, or who are seeking better housing prices, fewer earthquakes, a different climate, or who just want to realize their potential without uprooting from their favorite states.

Where to Go to Become an AI Superstar

If you're looking for a job in AI, consider educational hubs. They often come with the young energy of new companies, research support, and startup incubators. College towns are not only great places for arts and cultural innovation; they're also bubbling over with tech ideas and have the educational resources to support them.

AI engineers who aren't interested in startups should also look to corporate roles. After all, AI is going to play a role in company growth regardless of whether a company is a tech power or a design house. Even pet food firms are getting in on AI, with machine learning behind everything from inventory to security and beyond. These roles are growing in more diverse locations across the country, including Charlotte, North Carolina, and Kansas City, Missouri.

Overall, AI job seekers are in a stronger spot than ever. AI jobs are becoming increasingly common everywhere, and pretty soon candidates may not have to ask, "Where should I move?" at all, but will have their choice of multiple remote jobs in the industry no matter where they choose to call home.

In the meantime, job seekers should watch job listings in states with strong education and industry connections. Or, perhaps obviously, train their AI to do it for them.

Research Finds that Many of the Adults in the UK Are Not Aware that AI Generated CSAM is Illegal

Research by The Lucy Faithfull Foundation, a UK-based charity that works to prevent child sexual abuse, found that 40% of people in the UK wrongly believed that sexual abuse material generated by artificial intelligence (AI) is legal in the UK. The research also found that 66% of people in the UK believe AI will have harmful effects on children, while 70% didn't know that AI is being used to generate child sexual abuse material (CSAM).


While 88% of people in the UK said that AI-generated sexual or abusive images of people under 18 should be illegal, 40% assumed that such images are currently legal. In fact, it is completely illegal to generate, distribute or view sexual images of children in the UK. The foundation is working to raise awareness of how offenders use pictures of real children to make CSAM, and of the serious consequences those offenders face in the UK.
According to the foundation, AI-generated imagery is not the only concern. Offenders also use the faces of children who have been abused in the past, turning their images into CSAM and forcing those children to relive their trauma. Donald Findlater, director of the Stop It Now helpline, says the public isn't fully aware of how AI is being used to create sexual images of children, and that as AI advances, people should educate themselves about the harm it can cause. There is also speculation that certain machine learning models have been trained on CSAM, though such material can also be produced by combining concepts such as "child" and "explicit content".


Image: Digital Information World - AIgen