Showing posts with label issues. Show all posts

Apple Blames Poor Functioning Of iPhone Web Apps In The EU On The Stringent Digital Markets Act

 Tech giant Apple is drawing concern after its iPhone web applications in the EU began malfunctioning across user devices.


Many assumed it was a serious bug that needed a fix, but new reports suggest otherwise.

EU users began complaining that these progressive web applications stopped functioning correctly after recent iOS beta versions were installed.
Apple has since updated its website to explain why users were facing issues across the board. Unsurprisingly, the company says the ordeal has nothing to do with it, instead blaming the matter on the stringent Digital Markets Act.

According to Apple, the complexities involved are massive, and the requirement to support alternative browser engines is one reason the problem emerged in the first place.

The company came under fire when a leading security researcher noticed that PWAs had been demoted to simple webpage shortcuts in the iOS 17.4 beta. It was not clear at the time whether this was a genuine beta bug or a deliberate move to undermine PWA functionality across the European Union.

Apple is being forced to allow alternative app stores in this part of the globe, along with third-party payments and alternative browser engines.

PWAs are designed to let web apps install and run much like native iOS applications. In the affected betas, that was no longer the case, and the problem did not go unnoticed by the masses.

Developers found that installed web apps behaved like mere bookmarks saved to the Home Screen.

As reported by MacRumors, iOS 16.4 had previously enabled PWAs to display notification badges on their icons, similar to how native apps function.
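The distinction between an installed PWA and a plain bookmark comes down to the web app manifest, the small file a site ships to request app-style installation. Below is a minimal sketch in Python of the relevant manifest fields; the values and the `installs_as_app` helper are hypothetical illustrations for this article, not a real browser API.

```python
# Illustrative sketch: the web app manifest fields that ask the OS to treat
# an installed web app as a standalone, native-feeling app rather than a
# plain Home Screen bookmark. All values here are hypothetical examples.
manifest = {
    "name": "Example Reader",   # hypothetical app name
    "short_name": "Reader",
    "start_url": "/",
    "display": "standalone",    # request an app-like window without browser chrome
    "icons": [{"src": "/icon-192.png", "sizes": "192x192", "type": "image/png"}],
}

def installs_as_app(manifest: dict) -> bool:
    """Roughly: browsers treat the 'standalone' and 'fullscreen' display
    modes as a request for native-app-like installation."""
    return manifest.get("display") in {"standalone", "fullscreen"}

print(installs_as_app(manifest))  # True for this manifest
```

The `display` member is what asks the OS for a standalone, native-feeling window; when it is ignored, an installed app degrades to exactly the bookmark-like behavior developers reported.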

In its latest update, the company confirms that the system has been intentionally modified, attributing the change to the DMA rather than a bug.

Reports debunk bug theory; blame EU iPhone web app glitches on Digital Markets Act, not Apple.
Photo: DIW - AI-gen

AI Images And Deepfakes Displaying Child Abuse Could Be Criminalized, EU Confirms

 The EU is gearing up to criminalize serious offenses such as the depiction of child abuse through AI-generated imagery and deepfakes.


The bloc’s regulators have called it a move that was a long time coming, especially as new laws continue to spring up to curb the problem amid rapid tech developments.

The proposals would also make livestreaming child abuse a criminal offense, and would ban the exchange of so-called pedophile manuals under the same plan.

The measures form part of a wider EU plan to strengthen such laws in the future. The online risks are serious, offenses are getting harder to curb, and victims are finding it more difficult to report these crimes.

The proposal in question dates back to 2011, and the new version represents a major upgrade over what came before. In 2022, the Commission proposed rules for deploying technologies to detect child abuse material across various platforms.

The CSAM scanning plan has proven highly controversial, and many lawmakers have criticized tech giants for failing to take adequate measures to curb such acts.

The decision has drawn criticism many times, with experts and lawmakers arguing that the focus is not on the right areas, and pressure is mounting from all directions.
Much of the controversy concerns the scanning of private messages, while deepfakes remain at an all-time high. Child abuse imagery is something the tech world has struggled with for years, and now that AI has entered the picture, it’s going from bad to worse.

The plan also involves identifying those who are at risk and determining which content is real and which is fake.

The Commission stressed that rapid tech developments open up many new possibilities for abuse, which means the need for scrutiny is greater now than ever.

Photo: Digital Information World - AIgen

Former CIA Software Developer Sentenced To 40 Years In Prison For Leaking Confidential Data To WikiLeaks

 An ex-CIA software developer has been handed a 40-year jail sentence, sparking shock on Thursday.


The man, Joshua Schulte, received the four-decade sentence for leaking highly confidential CIA data to WikiLeaks, in what is considered one of the worst data breaches in history.

The Southern District of New York confirmed that the court’s decision, while momentous, was a long time coming. It arrives eight years after the developer stole archived copies of CIA data and handed them over to WikiLeaks himself.

He was found guilty of espionage, hacking, and making false statements to the FBI, which is itself a major crime.

Schulte used administrator privileges without authorization to copy the CIA archives, then tried to hide his actions by altering the network and deleting numerous log files along the way.

Schulte passed the massive trove of CIA documents to WikiLeaks, the organization run by Julian Assange, which published the material in 2017 under the labels Vault 7 and Vault 8. He used tools such as the Tails operating system and the Tor Browser to disguise his identity while carrying out the act.
Prosecutors described the crimes as one of the gravest betrayals of the nation in its history. And even after the FBI caught him, he doubled down, declaring that he wished to cause serious harm to the country through an information war, attempting to publish top-secret data from behind bars.

More details from the court’s decision noted how he caused serious harm to America’s national security and risked the lives of people linked to the CIA. He persisted in these efforts even after his arrest.

The data leak was said to damage the CIA’s ability to collect foreign intelligence against America’s adversaries, and it cost the agency millions, as recently confirmed by the U.S. Attorney’s Office.

During an FBI search of Schulte’s New York home, agents found thousands of encrypted pictures and videos featuring minors undergoing abuse, along with a wide range of other explicit material, according to a statement released on Thursday.

Schulte’s sentencing follows a series of trials concluded in 2020, 2022, and 2023, which in itself says a lot.

Photo: Digital Information World - AIgen

Your Smartphone Might Be Giving You ADHD, Here’s What You Need to Know

 The notion that children simply outgrow the symptoms of ADHD as they age has been thoroughly debunked by science, yet smartphones might actually be giving adults ADHD without them even realizing it. The disorder usually originates in children under the age of 12, but it seems adults are now developing its symptoms as well.


It bears mentioning that the symptoms can look rather different in adults than in children, with anger management issues, excessive restlessness, low self-esteem, trouble with relationships, and poor time management among them. Notably, the proportion of adults with ADHD was around 6.3% in 2020, a significant uptick from the 4.4% diagnosed in 2003.

According to research published in the Journal of the American Medical Association, the use of digital media can increase the likelihood of an ADHD diagnosis by as much as 10%. What’s more, adults are often required to multitask, which can be harmful because it pulls the mind in too many directions at once.
There is now considerable clinical evidence that excessive technology use can lead to ADHD symptoms down the line. The symptoms might also be caused by hormonal changes or other unrelated circumstances, but the connection between ADHD and digital media consumption and smartphone usage can’t be ignored. The findings point to something extremely pertinent to modern life, and it will be interesting to see where things go from here.

Photo: Digital Information World - AIgen

We're Losing the Fight Against Corruption, Here's Why

 Up until the mid-2010s, the general trend around the world was that corruption had started to decline, leading many to assume it would soon become a thing of the past. However, a general decline in law and order since 2016 has allowed corruption to make a comeback, and it may yet get far worse.


Notably, 23 countries have seen their corruption levels become the worst they’ve been in the past 30 years or so. The data comes from Transparency International and also points to a rise in authoritarianism around the world. Democratic countries aren’t immune to this backsliding either.

The UK, Sweden, the Netherlands and Iceland are all democratic countries, yet each hit its lowest score since the index was first recorded. Even so, the Netherlands remained in the top ten, as did Sweden, where they were joined by the likes of Norway, Germany, Luxembourg, Singapore and Switzerland.

On the other end of the spectrum, Somalia had the lowest score of all with 11 points out of 100. Venezuela, Syria and South Sudan were tied for second worst place with a score of 13, followed by Yemen with 16, and then North Korea, Nicaragua, Haiti and Equatorial Guinea with 17 points.

The US stood in 24th place with a score of 69, while the global average currently sits at around 43 points. That average has held for the past 12 years, with two out of three countries scoring under 50 points.

Weak judicial systems in countries like Poland and Hungary contributed to low scores of 54 and 42 respectively, while Russia saw its lowest score yet at just 26. Meanwhile, Ukraine provided some hope by continuing its 11-year improvement streak with 36 points, despite Russia’s invasion.

Transparency International data signals a rise in authoritarianism worldwide, affecting even traditionally democratic nations.

California Lawmakers Propose New Bills To Protect Kids From Social Media Addiction

 The state of California plans to roll out a list of bills designed to keep kids protected from all kinds of social media harms.


Lawmakers in the state have floated ideas regarding the privacy of minors’ data and changes to existing laws. The new bills arose after a previous state kids’ safety law, once slated to roll out soon, was put on hold.

One new law would give parents the chance to turn off addictive algorithms and feeds on their kids’ social media accounts. Once passed, it would let parents of kids below the age of 18 choose whether their children can access the apps during school hours or at night.

Social media firms have built platforms designed to keep users engaged, and that includes kids. Many studies have linked youngsters’ addiction to these platforms with depression, low self-esteem, and even anxiety.

Social media firms, for their part, say they have been working to put the right safety measures and safeguards in place so parents stay alert and such harms are prevented.

Meanwhile, AB 1949 would establish greater privacy and security controls for those below the age of 18. It would give the state’s users the chance to learn what personal data social media firms collect and sell, and to stop the sale of kids’ data to third parties. Any exception would require informed consent, which for kids below the age of 13 must come from a parent.

Additionally, the new law would close loopholes in the CCPA that left the information of 16- and 17-year-olds unprotected. As it stands, the CCPA only guarantees such safeguards for those below 16.

The new law would be a serious step toward closing gaps in privacy laws that let tech giants exploit and profit from kids’ sensitive data with impunity.

The bills coincide with U.S. Senate hearings on kids’ online safety. Additionally, California is part of a 41-state coalition that took legal action against Facebook’s parent firm Meta over harm to kids’ mental health.

Photo: Digital Information World - AIgen

ByteDance Unveils StreamVoice: AI-Powered Live Voice Conversion Raises Deepfake Concerns and Misinformation Risks

 ByteDance, the renowned Chinese technology firm responsible for the popular TikTok platform, has unveiled something new for its users—StreamVoice. This tool, leveraging generative-AI technology, enables users to seamlessly alter their voices to mimic others.


As of now, StreamVoice remains inaccessible to the general public, yet its introduction underscores the noteworthy progress in AI development. The tool facilitates the effortless creation of audio and visual impersonations of public figures, commonly referred to as "deepfakes." Notable instances include the use of AI to emulate the voices of President Joe Biden and Taylor Swift, a phenomenon particularly prevalent as the 2024 election looms.

Collaborating on this groundbreaking initiative are technical researchers from ByteDance and Northwestern Polytechnical University in China. It's imperative to note that Northwestern Polytechnical University, recognized for its collaborations with the Chinese military, should not be confused with Northwestern University in the United States.

In a recently published paper, the researchers underscore StreamVoice's capacity for "real-time conversion" of a user's voice to any desired alternative, requiring only a singular instance of speech from the target voice. The output unfolds at livestreaming speed, boasting a mere 124 milliseconds of latency—a significant achievement in light of historical limitations associated with AI voice conversion technologies, traditionally effective in offline scenarios.

The researchers attribute StreamVoice's success to recent advancements in language models, enabling the creation of a tool that performs live voice conversion with high speaker similarity for both familiar and unfamiliar voices. Experiments, as detailed in the paper, emphasize the tool's efficacy in streaming speech conversion while maintaining performance comparable to non-streaming voice conversion systems.

Referring to Meta's Llama large language model, a prominent entity in the AI landscape, the paper details the utilization of the "LLaMA architecture" in constructing StreamVoice. Additionally, the researchers incorporated open-source code from Meta's AudioDec, described by Meta as a versatile "plug-and-play benchmark for audio codec applications." Training primarily on Mandarin speech datasets and a multilingual set featuring English, Finnish, and German, the researchers achieved the tool's proficiency.

Although the researchers refrain from prescribing specific use cases for StreamVoice, they acknowledge potential risks, such as the dissemination of misinformation or phone fraud. Users are encouraged to report instances of illegal voice conversion to appropriate authorities.

AI experts, cognizant of advancing technology, have long cautioned against the escalating prevalence of deepfakes. A recent incident involved a robocall deploying a deepfake of President Biden, urging people not to vote in the New Hampshire primary. Authorities are currently investigating this deceptive robocall, underscoring the urgent need for vigilance in the face of evolving AI capabilities.

Content generated using AI and reviewed by humans. Photo: DIW - AIGen

NSA's Secret Web: General Nakasone Unveils Controversial Data Acquisition Tactics!

 

  • Gen. Nakasone reveals how NSA buys lots of Americans' internet data without permission for foreign intel and cybersecurity.
  • Netflow data shows internet traffic details, raising privacy worries for mental health and assault survivor sites.
  • Senator Wyden reveals NSA's domestic data collection, worries about agencies getting Americans' data without asking.
  • ODNI urged to make spy agencies follow rules like FTC's for legal data buying and be transparent about data keeping.
The departing chief of the U.S. National Security Agency (NSA), General Paul Nakasone, has unveiled a revelation that raises eyebrows from privacy critics — the NSA is delving into an extensive pool of commercially available web browsing data from Americans, all without the encumbrance of obtaining a warrant. This disclosure, unveiled by Senator Ron Wyden after Nakasone's correspondence, peels back the layers on the NSA's acquisition of a diverse array of information procured from data brokers, serving purposes such as foreign intelligence, cybersecurity, and secret missions.


In Nakasone's letter, he highlighted the NSA's interest in commercially available netflow data, concentrating on the intricacies of wholly domestic internet communications and interactions involving a U.S. Internet Protocol address connecting with its overseas counterpart. Netflow data, a cloak-and-dagger trove of non-content metadata, reveals the nuances of internet traffic flow, unraveling the mysteries of network activities and spotlighting servers that may be harboring the mischief of potential hackers.
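To make the privacy stakes concrete, here is a rough Python sketch of the kind of non-content metadata a netflow-style record carries. The field names follow the general NetFlow model; the values are hypothetical, and real records vary by vendor and version.

```python
from dataclasses import dataclass

# Rough sketch of a netflow-style record: non-content metadata only.
# Field names follow the general NetFlow model; no message content appears.
@dataclass
class FlowRecord:
    src_ip: str        # e.g. a U.S. IP address
    dst_ip: str        # e.g. an overseas server
    src_port: int
    dst_port: int
    protocol: str      # "TCP", "UDP", ...
    packets: int       # traffic volume only, never the payload itself
    octets: int        # bytes transferred
    start_ts: float    # flow start timestamp
    end_ts: float      # flow end timestamp

# Even without content, such a record reveals which servers a user contacted,
# when, and how much data moved -- the core of the privacy concern.
record = FlowRecord("203.0.113.5", "198.51.100.7", 52311, 443,
                    "TCP", 42, 61440, 1706000000.0, 1706000042.5)
print(record.dst_ip, record.octets)
```

Note there is no payload field at all: the sensitivity comes entirely from who talked to whom, which is exactly what Senator Wyden flags for visits to mental health or telehealth sites.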

Despite the NSA's discretion regarding the specific origins of the purchased internet records, Senator Wyden voiced apprehension over the sensitivity of this internet metadata. He underscored its potential to lay bare private information linked to individuals' online ventures, encompassing visits to websites dedicated to mental health, resources for survivors of sexual assault, or telehealth providers specializing in birth control or abortion medication.

Senator Wyden, entrenched in the Senate Intelligence Committee, unearthed details about the NSA's domestic internet records collection back in March 2021. However, the disclosure couldn't see the light of day until it shed its classified status. The revelation adds a layer of complexity to the ongoing scrutiny of the U.S. intelligence community's penchant for acquiring substantial datasets from private data brokers. While this practice isn't a novel concept, the ODNI's acknowledgment in June 2023 spurred concerns about its ramifications on privacy and civil liberties.

The NSA's dependence on commercially sourced data for intelligence-gathering has thrown a legal spotlight on the agency, especially as Congress scrutinizes its surveillance powers. Senator Wyden has seized upon recent actions by the Federal Trade Commission (FTC) against data brokers like X-Mode and InMarket, viewing them as significant legal milestones. These actions spotlight concerns about government agencies procuring Americans' data without explicit consent.

The NSA contends that prevailing U.S. law doesn't tether them to obtaining a court order for commercially available information. They argue that such data is equally accessible to foreign adversaries, private entities, and the U.S. government alike. Senator Wyden advocates for the ODNI to enact a policy aligning with FTC standards for legal data sales. This would compel U.S. spy agencies to purge data that doesn't meet these standards, or if retention is imperative, inform Congress or the public.

While the NSA affirms its collection of commercially available internet netflow data, the ambiguity persists on whether the agency also dips into location databases, a practice observed in other federal government agencies. Nakasone clarified in his letter that the NSA refrains from acquiring and using location data from phones or vehicles known to be within the United States, leaving room for interpretation concerning the acquisition of commercially available data originating from non-U.S. devices. The NSA, when probed, declined to expound on Nakasone's statements.

Note: Content is generated using AI and editing by humans. Photo: DIW - AIGen

The UN is Afraid of Killer Robots, Here’s Why

 Thanks to rapid advances in the field of AI, autonomous weapons systems, or killer robots in colloquial terms, might soon become a reality. In response, the UN has adopted a resolution aimed at reining in these systems. Such weapons can acquire targets without any human involvement whatsoever, which makes them an especially dangerous outcome of the current AI race.


With that said, Harvard law lecturer Bonnie Docherty recently spoke out about the issue. She described autonomous weapon systems as systems that rely on sensor inputs rather than human input to determine targets. They have in fact already been used multiple times, though not yet in their most sophisticated form.
Systems used during the ethnic conflict in the Nagorno-Karabakh region were able to identify targets on their own. The same can be said of the systems deployed in the Libya conflict, some of which are referred to as loitering munitions. These weapons can hover over the battlefield and deploy their payloads as soon as an enemy target is detected, even if a human didn’t order the strike.

Needless to say, autonomous weapon systems come with a whole host of ethical concerns. They can reduce the taking of human life to a matter of numbers and data, which many consider to be crossing a line.

Algorithmic bias is also essential to consider, since such systems could end up discriminating against people based on ethnicity, gender and other attributes. Even disabled individuals could end up being targeted, with AI-based targeting systems unable to account for human rights in the appropriate circumstances.
Apart from ethical considerations, legal concerns have also arisen. Machines might not be able to differentiate between military combatants and humans present on the battlefield in a civilian capacity. Human judgement is essential here, since weighing civilian casualties against military outcomes requires it.

This involves something called the proportionality test, wherein a human determines whether or not civilian loss of life justifies military action. For all of its advancement, AI can’t yet be programmed to exercise that kind of judgement.

This raises another important question. If an AI can’t exercise judgement, how can it be held accountable for any potential atrocities or crimes against humanity? At the same time, the operator of the system can’t easily be held accountable either, since they’re not technically the one who ordered the attack.

So far, any attempts to ban autonomous weapon systems have met stiff resistance from countries like Russia. Even the US and the UK have proposed non-binding resolutions in order to leave the door open for future use of these systems should the need arise. Indeed, a number of countries prefer non-binding resolutions, with each of them coincidentally developing autonomous weapon systems of their own.

As it currently stands, the UN is trying to collect civil society opinions on the matter. 164 member states voted in favor of the resolution, and it will be interesting to see where things go from here. According to the UN Secretary-General, a new treaty might come as early as 2026. If it fails to attract the required number of votes, the potential loss of life might be staggering. Unlike landmines and other munitions, these aren’t tried and tested weapons yet, which might make obtaining the votes harder in the long run.


Meta’s Instagram Is Full Of Fake Profiles That Are Catfishing Users But The Company Couldn’t Care Less

 Seeing scammers and imposters arise on social media is now a norm in the online world.


But you’d expect tech giants like Meta to do more to safeguard users online by getting rid of fake profiles. The reality seems far from that, however, as many have noticed a surge of fake profiles across the Instagram app.

In the past year, the issue has gone from bad to worse, and the app’s parent firm is falling behind in finding a solution, despite the many signs that a given profile is using another person’s identity or image.
Tech outlet Bleeping Computer investigated the matter and found that a large number of reports filed against such scam accounts, which used fake IDs to impersonate internet personalities and other public figures, were dismissed by the admins. Clearly it’s a huge issue: no appeal made a difference, and the profiles continue to operate on the app as we speak.

Conceptual image created with AIgen

After seeing all of this, it would not be wrong to say that Instagram has become a giant safe haven where scammers operate at large. People interact with others based on how they appear or what their profile says, only to find out later that it’s all a scam and nothing is real.


Authenticity on social media is rare as it is, and with fake profiles going unnoticed by Meta’s Instagram, a major issue is arising. Pretending to be someone other than your true identity is concerning and a major sign of catfishing. Anyone can create two identities for several reasons, one of the main ones being to separate their personal life from their professional one. But you need to be honest at least, right?

A growing number of users say they keep filing complaints only to see Meta dismiss them and leave the fake accounts as they are, leaving them wondering what’s going on and whether any safeguards were ever in place. The only justification given is that the company follows its Community Guidelines and uses both humans and technology for review. And no, appeals don’t work either, so what is a person supposed to do?

When leading media outlets asked Meta to shed light on what’s going on, they heard nothing back from the company’s reps. That, again, is another red flag worth mentioning.

Could this be the latest ploy from the tech giant in terms of selling blue ticks?

We don’t think such acts are a mere coincidence. They are becoming far too normal on the platform and something needs to be done before it’s too late.

Plenty of imposters target the real profiles of leading public figures, influencers, law enforcement officers, and creators of adult content. They then start following the followers of the actual account, hoping to be followed back so they appear authentic. Finally, they block the profile they are copying, ensuring no contact is made with the real user in question.


Those who suffer are the real users, who fear their identities are being used for catfishing and can do nothing about it while Meta stays silent. So what could the reason be?

Well, with Meta now pushing users to purchase blue ticks for supposedly greater protection, perhaps the company hopes to boost its paying user numbers by not labeling this kind of content as spam or fake profiles.

Today, the Meta Verified subscription is priced between $12 and $15 per month, which is not cheap, and those monthly payments add up to a sizable side business for the company.

Meta's Facebook and Instagram Are the Most Data Hungry Apps According to This Study

 Surfshark recently did a deep dive into the 100 most popular apps on the App Store and found that 20% of the data these apps collect is used for tracking purposes. The apps were ranked based on the 32 data points they may collect, as defined by Apple’s privacy policy.


Facebook and Instagram were found to be the least privacy-conscious apps of all. They collect all 32 data points, all of which are tied to the user’s identity, and 7 of which are used specifically for tracking. These include data points such as names, home addresses and phone numbers, suggesting these apps offer far less privacy than users would ideally prefer.
X, formerly known as Twitter, was also a major offender in this regard. It collected fewer data points than Facebook and Instagram, 22 to be precise, but it used 11 of them to track users.

On the other end of the spectrum, Signal was found to be the most privacy-conscious app of all, at least among social media and instant messaging apps. It collects a single data point, namely the user’s phone number, and neither links it to the user’s identity nor uses it for tracking on third-party platforms.

Interestingly, WhatsApp was also surprisingly privacy-conscious despite falling under the Meta umbrella like Facebook and Instagram. It didn’t use any of its collected data for tracking, making it, Signal and Telegram the only three apps to avoid the practice.
Users need to be educated about which apps are tracking them; otherwise they might not know when their privacy is at risk. The study shows that some apps continue to find ways to track users despite Apple’s privacy policy.
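The study’s figures can be compared directly with a short script. This Python sketch uses only the numbers cited above; the app selection and the `tracking_share` helper are illustrative, not part of Surfshark’s methodology.

```python
# Comparing the study's figures: data points collected vs. used for tracking.
# The counts are the ones cited in the article, out of the 32 data points
# defined by Apple's privacy policy.
apps = {
    "Facebook":  {"collected": 32, "tracking": 7},
    "Instagram": {"collected": 32, "tracking": 7},
    "X":         {"collected": 22, "tracking": 11},
    "Signal":    {"collected": 1,  "tracking": 0},
}

def tracking_share(stats: dict) -> float:
    """Fraction of an app's collected data points used for tracking."""
    return stats["tracking"] / stats["collected"]

# Rank apps by how much of what they collect is then used to track you.
ranked = sorted(apps, key=lambda a: tracking_share(apps[a]), reverse=True)
print(ranked)
```

Ranking by the share of collected data used for tracking, rather than raw counts, puts X first (half of what it collects goes to tracking) and Signal last, with none.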



