Showing posts with label Facebook. Show all posts

Meta In The Hotseat As EU Watchdogs Express Concern Over Ad-Free Subscription Model

 Tech giant Meta is facing heightened scrutiny from data protection watchdogs in the EU.


The news comes after the rollout of ad-free subscriptions took center stage in 2023. EU regulators consider the 'pay or okay' model worrisome, and they're making sure Meta knows how they feel about it.

In a recently published joint letter, several watchdogs from that part of the globe spoke out against the model for ad-free subscriptions.

The leading concern is the roughly $14 monthly fee Meta charges users who want to keep their data private and avoid tracking-based ads. The regulators expressing concern include those hailing from Norway, Germany, and the Netherlands.

Plenty of concerned parties feel Meta's pay-or-okay fee is too expensive for most users, who would therefore find it hard to choose the option that lets them reject tracking.

The letter similarly points out that news publishers earn little income, as a major share of profits stays with the advertisers themselves.
The pay-or-okay policy is not popular with experts either. Many also note that a pre-selected acceptance button is deemed illegal, yet charging a fee for rejecting tracking currently is not.

The letter also notes that pay-or-okay schemes push consent rates close to 100%, and questions how charging users for clicking the rejection button can be squared with the requirement that consent be freely given.

Paying such fees also means families could be spending close to $38,000 each year if 35 applications on their devices follow this pay-or-okay business model.

The letter on ad-free subscriptions addresses the latest regulatory developments and guidance, as well as judgments issued over the years by leading EU regulators and courts.

Meta does not agree, saying its approach is in line with EU law, including the stringent Digital Markets Act, but 28 leading organizations feel the model robs users of a genuinely free choice over how their data is used.

Image: DIW

200,000 Private Records From Facebook’s User Database Stolen, Hackers Forum Confirms

 A hacker has raised the alarm among Facebook users after claiming that 200,000 personal data records were stolen from the firm's database.


The news is alarming for obvious reasons, as the claims went on to describe how a cybercriminal dubbed 'alogoatson' breached a contractor in charge of Facebook's cloud services and stole part of the user database, featuring a significant number of entries.

The information was posted by a leading threat actor dubbed 'IntelBroker', who is notorious for a long list of leaks, including data stolen from General Electric and other high-profile attacks.
The sample entails full names, profile image links, and hashed passwords. Profile ratings, settings, and plenty of reviews were also on display.

The hacker explained how the data that was compromised included the likes of Physical IDs.

The database was first posted in February and holds close to 24k email IDs alongside other compromised information. Media outlets have requested comment from tech giant Meta, but there has been no response so far.

This is clearly not the first time a firm like Facebook has been at the center of a long list of data leaks. In 2022, a database from the same tech giant featuring records of close to 533 million Facebook users went public online for free.
For a while now, the company has also been slammed for enabling third parties to gather user data, as seen in the high-profile and infamous Cambridge Analytica scandal.

The danger is massive and cannot be ignored, because a large amount of private data getting leaked could potentially impact the lives of millions. There is a lot at stake.

Threat actors collect such data for phishing attacks and other convincing, malicious schemes against the individuals whose data was exposed.

Media outlets continue to update leading data leak checkers to include data from several different leaks. Warnings are generated so users remain vigilant at all times and maintain top-level privacy and security with passwords that are not easy to break.

Photo: Digital Information World - AIgen

H/T: Bleepingcomputer / Cybernews

Meta Considers Revisiting Its Hate Speech Policy After Massive Concern Over ‘Zionist’ Terminology

 Tech giant Meta is under pressure amid growing concern surrounding its hate speech policy.


Many users were wary of how the term 'Zionist' was being used in posts linked to the Arab and Jewish communities.

The policy currently permits use of the term in political discourse; however, posts are removed when the term refers to Jews or Israelis directly, especially in a violent or dehumanizing context. This was confirmed in an email from a Meta representative, who mentioned that the company planned to invite others to discuss the matter in the future, as first spotted by The Intercept.

Meanwhile, the email further mentioned that the firm was considering a review in the context of posts and the concerns of users, who are the real stakeholders on this front.

This means we might soon witness a change in policy. Meanwhile, advocacy groups (including MPower Change and 7amleh) questioned how these policies were being enforced, including whether flagged posts were reviewed by the platform's algorithms or by humans.

For a while now, Meta has been blasted for rolling out unfair enforcement measures, with critics saying pro-Palestinian content was being censored.
The groups raised questions about such policies and how they would be enforced for detecting and censoring this kind of language.

Meta's AI-based systems are designed to flag all posts deemed problematic, currently without any human review. The advocacy groups were the ones in attendance during the meeting with the tech giant.

Right before that meeting took place, a whopping 73 different organizations sent a letter to the company warning that extensions to the policy could mischaracterize conversations about Zionists, treating the term as a proxy and conflating criticism of Israel with antisemitism.
Such a move, they argued, would stop Palestinians from sharing their daily experiences with the world.

During the meeting, Meta shared examples of posts that would soon be removed, including posts where Zionists were dubbed 'rats'.

Such content moderation decisions would come from a company without a reliable track record on protecting Palestinian speech.

In the letter, the organizations expressed serious concern about the lack of response to the rise in censorship of pro-Palestinian content, which has been at an all-time high for quite some time now.

They argue the proposal is ineffective at combating antisemitism while ignoring issues fueled by Palestinian oppression, at a time when many courts and human rights experts acknowledge that something as severe as genocide is taking place in the Palestinian region.

“There is a real danger that such policy revisions would stifle free expression of voices speaking out against the Israeli government’s systematic violations of Palestinian rights, and its ongoing onslaught in Gaza, where a real and imminent risk of genocide looms large,” expressed Alia Al Ghussain, Researcher and Advisor on Artificial Intelligence and Human Rights at Amnesty Tech.

Meta's policy allows the term "Zionists" in political discourse but removes it when linked to Jews or Israelis in a violent context.
Photo: Digital Information World - AIgen/HumanEdited

Zuckerberg Faces Tough Questions in Senate Over Meta's Role in Child Safety

 Mark Zuckerberg, the Chief Executive Officer of Meta, expressed his heartfelt apologies during a Senate session on online child safety, acknowledging the distress experienced by parents who attributed their children's tragic outcomes to Instagram. Senator Josh Hawley's inquiry prompted Zuckerberg's candid response: "I’m sorry for everything you’ve all gone through. It’s a terrible ordeal, and no family should endure the hardships yours have faced."


The Senate Judiciary Committee convened the hearing, titled “Big Tech and the Online Child Sexual Exploitation Crisis,” where Zuckerberg, alongside the CEOs of TikTok, Discord, X, and Snap, faced a barrage of queries from lawmakers. Holding snapshots of their children, parents confronted the tech leaders, donning blue ribbons advocating the "STOP Online Harms! Pass KOSA!" initiative, urging the enactment of the Kids Online Safety Act.

Upon Zuckerberg's entrance, audible disapproval emanated from some parents, underscoring the intense scrutiny Meta has faced concerning child safety issues on its platforms. While addressing parents, Zuckerberg's words weren't confined to the microphone but resonated on a livestream. Post-apology, he assured parents of ongoing efforts, emphasizing, "This is why we invest significantly and will persist in industry-leading endeavors to ensure that no one has to endure the hardships your families have faced.”

Throughout the hearing, Zuckerberg confronted rigorous questioning, notably about nonconsensual explicit content, drug-related fatalities linked to Meta's platforms, and various other concerns. Meta grapples with a federal lawsuit from numerous states, alleging intentional creation of "psychologically manipulative" features on Facebook and Instagram, concealing internal data that reveals harm to young users.
Senator Richard Blumenthal highlighted emails purportedly received by Zuckerberg from Meta’s global affairs director, Nick Clegg, indicating concerns about well-being topics such as problematic use, bullying, harassment connections, and suicidal self-injury. Clegg, a former deputy prime minister of the UK, communicated that Meta’s safety efforts were constrained by insufficient investment.

Senator Hawley referred to a 2021 Wall Street Journal investigation revealing Meta's awareness of Instagram's detrimental impact on teenagers' mental health. Zuckerberg contested Hawley’s presentation of these details as “facts” and claimed selective interpretation of the research.

Responding to questions from Senator Welch about layoffs in the trust and safety departments, Zuckerberg clarified that Meta's layoffs were not sector-focused. Senator Tillis emphasized a balance between the executives' humanity and their corporate responsibilities, encouraging continuous efforts to mitigate the negative impact of their platforms.

Zuckerberg disclosed to senators that Meta employs 40,000 individuals in its trust and safety division. The hearing underscored the ongoing challenges faced by major tech companies in balancing innovation with the responsibility to protect users, particularly the vulnerable demographic of children and teenagers.

Photo: United States Senate Committee on the Judiciary

Note: Content in this story is written using AI and edited.

Meta's Facebook and Instagram Are the Most Data Hungry Apps According to This Study

 Surfshark recently did a deep dive into the top 100 most popular apps on the App Store and found that 20% of the data points they collect are used for tracking purposes. The apps were ranked based on the 32 possible data points, defined under Apple's privacy policy, that each one collects.


Facebook and Instagram were found to be the least privacy-conscious apps of all. They collect all 32 data points, all of which are tied to the user's identity, and 7 of which are used specifically for tracking. These include data points such as names, home addresses, and phone numbers, suggesting these apps offer far less privacy than users would ideally prefer.
X, formerly known as Twitter, was also a major offender. It collected fewer data points than Facebook and Instagram, 22 to be precise, but used 11 of them to track users.

On the other end of the spectrum, Signal was found to be the most privacy conscious app of all, at least in terms of social media and instant messaging. It collects a single data point, namely the user’s phone number, and it doesn’t link it to the identity of the user nor does it utilize it for any tracking purposes whatsoever on third party platforms.

Interestingly, WhatsApp was surprisingly privacy conscious despite coming under the Meta umbrella just like Facebook and Instagram. It didn't use any of its collected data for tracking, making it, Signal, and Telegram the only three apps to avoid this practice.
Users need to be educated about which apps are tracking them, otherwise they might not know when their privacy is at risk. This study shows that some apps continue to find ways to track users despite Apple's privacy policy.
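The kind of comparison Surfshark made can be illustrated with a short sketch. The per-app figures below come from the numbers cited above; the combined scoring (weighting tracked data points double) is an illustrative assumption, not Surfshark's actual methodology.

```python
# Rank apps by a simple privacy-risk score based on how many of Apple's
# 32 possible data points each app collects, and how many it uses for
# third-party tracking. The 2x weight on tracked points is an arbitrary
# illustrative choice, not Surfshark's real scoring.
apps = {
    "Facebook":  {"collected": 32, "tracked": 7},
    "Instagram": {"collected": 32, "tracked": 7},
    "X":         {"collected": 22, "tracked": 11},
    "Signal":    {"collected": 1,  "tracked": 0},
}

def risk_score(stats):
    # Tracked data points count double since they leave the platform.
    return stats["collected"] + 2 * stats["tracked"]

ranked = sorted(apps, key=lambda name: risk_score(apps[name]), reverse=True)
for name in ranked:
    print(name, risk_score(apps[name]))
```

Even with this crude weighting, Signal lands at the bottom of the risk list while Facebook and Instagram sit at the top, matching the study's conclusion.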





The Enormous Scale of GDPR Fines for Mark Zuckerberg’s Companies Revealed

 To date, the social media giant Meta has been hit with a massive $2.8 billion (yes, billion) in fines for going against the GDPR.


Meta remains a favorite target of officials in the EU, who have been penalizing the organization left and right for several violations of the GDPR.

It wouldn't be wrong to call Ireland the world's leading data regulatory authority, as it continues to impose massive fines on big technology companies that fail to follow the stringent GDPR, which came into force in 2018.


Meta is a big player in the digital space and has been subject to legal action for years; let's not forget the fines worth billions that have already been handed down. GDPR breaches have been a constant worry for Zuckerberg, and from what we're seeing right now, that's not ending soon.

The company was penalized a massive $442 million in September 2022, and another $425 million fine was imposed on Meta in January 2023. As time went on, things did not get any better. The Irish regulator has been penalizing the organization since the start of the GDPR, and the majority of the fines Meta has been forced to pay have come from there.

Remember, the fine imposed on WhatsApp Ireland led the pack back in 2021. One thing is certain: Meta has a lot to do to get back into the regulator's good books.

The Irish Data Protection Commission imposed a record 1.78 billion euros in fines in the past year, the lion's share across the EU.

Most tech firms have their European headquarters in Ireland, which is why the country has served as the leading data privacy enforcer for so long. Fines have been hitting tech giants left and right, and Meta is the hardest hit among them all. So what could the reason behind all of this be?

Both Facebook and Instagram are being called out for unlawful data processing. The apps have breached the GDPR for many of the same reasons, and many can't help but wonder why Meta does not learn from the past.

One data privacy lead in the cybersecurity field suggested that the Irish regulator's prominence has more to do with the country's favorable business environment than anything else. Recent reports, however, indicate that the volume of fines issued across the EU in the past year owes much to successful appeals in several jurisdictions, and many feel that divided opinions over several decisions, including within the EU Data Protection Board, have also played a part.

In May of 2023, the company was barred from transferring EU citizens' data to the US, which investigators said was a huge privacy violation. Cross-border penalties were imposed on the tech giant, with a warning that such action would be repeated in the future. Now we are seeing it punished for unlawful data processing, leading to losses worth billions.
Meta has also come under fire for another leading reason: its decision to roll out ad-free premium subscriptions was called out for breaching EU law. Failure to comply with EU law appears to be the main reason Meta keeps being penalized.





Bombshell Documents Reveal Meta Intentionally Marketed Its Messaging Apps To Kids And Ignored High Volumes Of Explicit Data Sharing With Minors

 Shocking new documents concerning child safety at tech giant Meta were recently unsealed.


This has to do with the organization fighting a lawsuit brought by New Mexico's Department of Justice against the company and its CEO. The documents contain bombshell findings, including that Facebook's parent firm knew a lot was going wrong yet chose to turn a blind eye.

The findings include how the company intentionally marketed its messaging apps to minors while ignoring the huge volumes of inappropriate and explicit content being shared between kids and adults.

The papers were unsealed on Wednesday and are now a major part of the complaint, covering several instances of employees internally raising doubts about how children were being exploited through the firm's own messaging apps. The company saw the risks of DMs on both Messenger and Instagram but still chose to turn a blind eye, knowing very well the impact on underage individuals.

The company did not prioritize safeguards and even declined to build child safety features, as it felt they weren't profitable.

In a statement to media outlet TechCrunch, New Mexico's Attorney General said Meta and Zuckerberg knew child predators were out there exploiting young kids. He also raised serious issues with the company enabling end-to-end encryption for the app, a rollout that began last month.

In another lawsuit, the tech giant was bashed for not addressing the exploitation of minors through its apps, with critics warning that encryption without the right safety measures in place would lead to disaster and endanger minors to a greater degree.

For many years, the tech giant received warnings from its own employees about this ordeal and how its decisions would have a devastating impact. But top executives failed to act, continuing to downplay a situation that many internally called out as severe and pervasive.


The lawsuit, first filed last month, claims that Meta's apps are transforming into a marketplace for predators who can easily prey on their targets. The firm turning a blind eye to abuse material even after it was reported is a major source of worry for obvious reasons.

The complaint, first filed in December, describes a long list of decoy accounts that proved how users aged 14 or below were targeted while the company failed to do anything about it.

According to the press release, child exploitation material is said to be more than 10 times as common here as on adult websites like Pornhub or even OnlyFans.

In response to the complaints, a spokesperson for the organization said it wants teenagers safe at all costs and that age-appropriate experiences are only possible with the right tools. Meta says those are all in place at the moment, that it is working hard to curb the issue, and that it is hiring people with dedicated careers in keeping everyone safe and supported online.

The complaint has left a dark spot on the company's reputation, which it hopes to remove by working with the right organizations. At the same time, Meta blasted those who cherry-picked documents to display the company's ugly side, saying it finds it appalling that anyone thinks mischaracterizing specific quotes would do any good in handling the matter.


The unsealed papers show how the company tried long and hard to recruit children and teens to its apps while limiting safety measures along the way. A presentation from 2016 noted that many teens were spending more time on these messaging apps than on Facebook, and it outlined a plan to win the younger generation over.

Another internal email from 2017 showed Facebook executives saying no to scanning the Messenger app for harmful content, because they felt it would be a competitive disadvantage versus other platforms offering greater privacy.

The tech giant knew its services were popular with youngsters, including kids as young as 6 to 10 years old, yet it still failed to protect them against exploitation, which makes the lack of action all the more shocking.

The company's own acknowledgment of child safety issues on its apps is seriously damaging. One internal presentation from 2021 estimated that 100k kids each day were sexually harassed through these apps, receiving explicit content including images of private parts. More complaints were generated against the company, including from Apple executives who wished to have the apps removed from the App Store after a 12-year-old was targeted through Instagram.

Employees noted internally that such incidents really do tick Apple off, and many questioned whether the company had any timeline in place to prevent adults from messaging minors through Instagram Direct.

Meanwhile, other internal documents revealed that safeguards in place on the Facebook app did not exist on Instagram. Implementing such security measures to keep people safe there was never a priority to begin with.

In fact, adult relatives reaching out to minors through Instagram DMs was seen as a huge growth bet and a less favorable reason to create safety features. One worker also found that grooming took place twice as frequently on Instagram as it did on the Facebook app.

Meta itself addressed the grooming problem in March of 2021, acknowledging that it had stronger checks in place for detecting and measuring the situation on Facebook and Messenger than on Instagram.

This included sexualized comments left on minors' posts on the Instagram app, a problem the company itself called a disappointing experience for all those involved.

But Meta's spokesperson keeps repeating to TechCrunch that the company uses the most sophisticated technology and shares information and data with other companies, including state attorneys general, so that predators can be rooted out. In a single month, close to half a million accounts were actioned for violating its child safety policies.


So as you can clearly see, Meta has been facing plenty of scrutiny for its failures to properly eradicate CSAM. Large-scale apps are required to report such material to the NCMEC, and per the latest data, Facebook filed 21 million reports, making up the majority of reports in this domain. Include the roughly six million additional reports from WhatsApp and Instagram, and Meta's platforms account for a staggering 86% of all reports considered.
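The report shares above can be sanity-checked with quick arithmetic. The 21 million and six million figures come from the article; the industry-wide total of roughly 31.5 million reports is an assumed figure used here purely for illustration.

```python
# Share of NCMEC CSAM reports attributable to Meta's platforms.
# Facebook's 21M and WhatsApp + Instagram's ~6M come from the article;
# the ~31.5M industry-wide total is a hypothetical figure assumed here
# only to show how an ~86% share would be computed.
facebook_reports = 21_000_000
whatsapp_instagram_reports = 6_000_000
assumed_total_reports = 31_500_000  # assumption, not from the article

meta_reports = facebook_reports + whatsapp_instagram_reports
share = meta_reports / assumed_total_reports
print(f"Meta share: {share:.0%}")
```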




Alarming New Study Exposes Facebook’s Intrusive Monitoring Of Users’ Online Activities

 Tech giant Meta is again under fire after a new report exposed the Facebook app as invading user privacy by tracking users' online activities.


The study was conducted recently by leading organization Consumer Reports, which found that an average of 2,230 firms were engaged in sharing each user's information with the Facebook app.

There were also some alarming outliers, including instances where as many as 7,000 firms shared a single user's data with Facebook.

The news is shocking because it comes at a time when concerns over data privacy are growing around the globe. The stats are creating serious trust issues and may end up impacting the company's reputation as well. In markets such as the EU, data privacy regulation is going strong and remains at the top of the agenda.

Rules keep getting stricter, and data privacy regulations are meant to ensure user trust is not broken. But research like this proves there is a lot to worry about, and much trust has already been broken when big tech giants like Meta engage in such behavior.

And in regions where there is even more scrutiny now than ever, it means saying hello to serious increases in legal and ethical issues.


Meta has been rolling out transparency tools of various kinds and refutes claims that it endangers users' privacy.

Studies like these continue to highlight big issues, such as data providers whose disclosed names mean little to users across the board. The report also notes firms offering services to top advertisers that let them disregard users' opt-out requests.

So how exactly did this research come into being? Consumer Reports collaborated with The Markup to recruit 709 volunteers who shared their Facebook data archives. Participants downloaded up to three years of data archives through Facebook's settings and submitted them.


This let the researchers analyze tracking activity, unraveling how many firms shared users' personal data with Meta through server-to-server transfers.

The study did acknowledge that its findings may not be representative of the American population in general. Since the data came from a self-selected group of people, it cannot conclusively represent the entire nation, and the results weren't adjusted for specific demographics either.

Participants likely skewed toward people who are especially concerned about their privacy or technically inclined. Let's also not forget the element of bias arising from some members of the research organizations taking part themselves.

So what does Meta have to say about all of this? The company says it offers a long list of transparency tools to ensure people understand what types of data they share with the world and how that data gets used.





Facebook Launches New Feature That Stops Irrelevant Content From Showing Up On Feeds

 Meta is launching new functionality that prevents users from seeing irrelevant content on their Facebook news feeds.


Meta says the new feature is designed to help users avoid content they find annoying or unappealing while scrolling. While the company says the feature lets you customize your news feed, it doesn't give users direct control over it. Instead, you help train the app's algorithms to stop surfacing content you're not interested in or find irrelevant.

A recent blog post revealed that users will begin seeing new buttons labeled 'Show more' and 'Show less', located right under posts in the feed.

When 'Show more' is selected, the ranking score increases for that post and those similar to it. When 'Show less' is selected, the ranking score declines. In this way, Facebook's algorithms use the feedback to adjust which posts appear across the feed.
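The feedback loop just described can be sketched in a few lines, assuming a per-topic score that each click nudges up or down. The topics, score values, and multipliers are illustrative assumptions, not Facebook's actual ranking system.

```python
# Minimal sketch of feedback-driven feed ranking: each "show more" /
# "show less" click adjusts a topic's score, shifting what the feed
# surfaces next. All numbers here are illustrative assumptions.
scores = {"sports": 1.0, "cooking": 1.0, "politics": 1.0}

def record_feedback(topic, action):
    # "show more" boosts the topic's score; "show less" demotes it.
    if action == "show_more":
        scores[topic] *= 1.5
    elif action == "show_less":
        scores[topic] *= 0.5

def ranked_feed(posts):
    # Order candidate posts by their topic's current score.
    return sorted(posts, key=lambda p: scores[p["topic"]], reverse=True)

record_feedback("cooking", "show_more")
record_feedback("politics", "show_less")
posts = [{"id": 1, "topic": "sports"},
         {"id": 2, "topic": "cooking"},
         {"id": 3, "topic": "politics"}]
print([p["id"] for p in ranked_feed(posts)])  # cooking first, politics last
```

After the two clicks above, the boosted cooking post ranks first and the demoted politics post drops to the bottom, which is the behavior Meta describes.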

It's important to note that these buttons will not appear on every post in the feed, though Meta says they will show up periodically. You may also find the option in the three-dot menu located at the top right of a post.

Similarly, Meta says it is testing a comparable feature in Reels on the Instagram app.

It's worth mentioning that YouTube is offering similar controls to help users shape the content they see on the platform and improve video recommendations. That said, YouTube has been criticized time after time, with its tools called out as inefficient at keeping bad recommendations away. But who knows, this might be the change it needed.

At the same time, Meta is reportedly considering a new option that would allow users to create several profiles on the Facebook and Instagram applications using the same account.

The whole idea is to let users maintain distinct profiles for the different groups they want to connect with across the platforms.




52% of Americans Report Seeing More Ads on Facebook

 Digital ads are the main source of income for most tech platforms, and users have been reporting an increase in the number of ads they are exposed to. 52% of Facebook users stated that they are seeing more ads than they used to, and that number is around 47% for YouTube. That said, on most other platforms a majority of users say the number of ads they see has stayed more or less the same.


Facebook and YouTube are unique in that they are the only platforms where more users report seeing a higher number of ads than say the number has stayed the same. 39% of Facebook users and 45% of YouTube users said the number of ads has stayed consistent, with the disparity clearly far larger for Facebook.

LinkedIn is satisfying its users the most in this regard: only 20% reported an increase in the number of ads, with 64% saying they are seeing the same number as before. Reddit, Pinterest, Twitter, and Snapchat are also doing fairly well, with 54%, 55%, 53%, and 48% of users respectively reporting the same ad volume. The percentage of users reporting higher ad volume on these platforms is 27% for Reddit, 29% for Pinterest, 35% for Snapchat, and 37% for Twitter.

Things are a bit more dire in the world of TikTok and Instagram. TikTok performs marginally better, with 46% of users reporting the same ad volume compared to 41% reporting an increase. Instagram users are split right down the middle, with 44% saying they are noticing more ads and a further 44% reporting the same numbers as before.

Facebook's recommendation algorithm is notorious for its low accuracy and success rate, meaning users are seeing more ads but with only a 71% chance that any given ad is relevant.



TikTok Surpasses Facebook and Instagram Among British Users

 The rise of TikTok has been nothing short of meteoric, and while the platform has faced controversies over potentially harvesting data for the Chinese government, its popularity continues to increase unabated. The UK Parliament, for example, recently suspended its own TikTok account for fear of leaking sensitive data to a foreign power, yet citizens of the island nation have been using TikTok more and more frequently as of late.


TikTok is now the best performing social media platform in terms of average time spent, with British consumers spending an average of 49 minutes on it per day. That puts it ahead of Facebook, whose average time spent fell to 43 minutes this year after reaching a high of 47 minutes in 2020.

TikTok is also giving Snapchat a run for its money, becoming the third most popular social media platform in the UK after taking that position from Snapchat. Both Instagram and Snapchat have seen muted growth in average user time spent, stuck at 29 and 25 minutes per day respectively.

The total number of British TikTok users is estimated to reach 21.1 million by 2026, about half of all mobile users in the island nation. This is driving a whopping 192% increase in TikTok's in-app revenue from British users, and as the platform enters the social commerce arena, that revenue will have nowhere to go but up.

The UK is lagging behind in social commerce, which leaves TikTok well poised to introduce the concept to the market before its better established competitors such as Facebook and Instagram get the chance to. Britain is only behind China in terms of ecommerce adoption, and that creates the perfect storm for TikTok to make its entry.