
Meta’s Oversight Board Expands To Include Threads

Meta’s Oversight Board has announced that it is expanding its purview to include Threads.


The board recently announced that Threads users will be able to appeal content moderation decisions made by Facebook’s parent company, giving the independent group the ability to shape policy for Meta’s newest app.

It’s a significant expansion for Meta’s Oversight Board, which until now has only weighed in on issues involving content on Facebook and Instagram. The change gives Threads users independent accountability relatively early in the app’s life.

According to the Oversight Board, appeals on Threads will work much the same way they do on Meta’s other two apps, Instagram and Facebook.

Once users have exhausted Meta’s internal appeals process, they can request a review from the Oversight Board.

Under the rules set out at the board’s formation, Meta is bound by the board’s decisions on individual posts, but it is not obligated to adopt the board’s broader policy recommendations.

Bringing Threads’ content moderation under the board’s purview reflects the growing influence of the Twitter-like platform, which only launched last summer.

So far, the platform has close to 130 million users, and Zuckerberg has floated a far more ambitious target of one billion. Only time will tell how true his prediction proves to be.

On closer inspection, Threads follows rules similar to Instagram’s, and Meta has already encountered pushback from a wide number of users over its content recommendation policies.

For now, Threads bars certain search terms linked to the COVID-19 pandemic and other topics it deems sensitive; as one would expect, that has raised eyebrows.

It similarly surprised some critics when it announced that accounts posting political content would no longer be recommended to users unless those users explicitly opt in to such suggestions.

Whether the board will weigh in on decisions like these, only time can tell, and it will be a while before Threads users see any changes resulting from the board’s recommendations.

Meta’s Oversight Board takes up only a small fraction of the appeals users generate. It can take weeks or even months for the board to reach a decision, and months more for the tech giant to amend its existing rules based on that guidance.

That said, the board can expedite the whole process in certain cases, so not every review has to take a long time before coming into effect.

For now, Meta’s Oversight Board likely has its hands full; an app like Threads is far from the simplest to handle, especially when it comes to content moderation.

Threads users now have the opportunity to appeal Meta's content moderation decisions through the Oversight Board.

Tech Giants OpenAI, Meta, Google, And Microsoft Unite In Effort To Fight AI Election Deepfakes

Deepfake images continue to proliferate, and that’s one reason tech giants are concerned as election season takes center stage in the US and around the globe.


Plenty of people have expressed serious concern on this front, which is likely why Google, Microsoft, Meta, and OpenAI have united. The worry grew even more acute after AI-generated images of Taylor Swift flooded the X social media network.

Some reports indicated that the images were produced with Microsoft’s popular AI image generator, Designer.


Remember, the US is set to elect a new head of state this year, and there is mounting concern that AI deepfake images could be used to mislead voters throughout the election period.

That’s why the tech giants feel now is the time to pool their best resources to combat the deceptive use of AI during elections.

The agreement, dubbed the AI Elections Accord, was unveiled at this year’s Munich Security Conference. In a press release, the participating companies said they would follow a set of commitments to combat election misinformation.

The press release stated that the organizations would work to counter deceptive election content such as deepfakes, developing technology that limits the risks while also using open-source tools when and where appropriate.

They also plan to detect the distribution of such content on their platforms, assess their models for any risks of producing deceptive election content, and support public awareness, media literacy, and broad resilience.

Microsoft’s president was among the executives who said in the press release that the companies hope to embrace AI’s benefits while taking on the responsibility of ensuring these tools are not weaponized during the election period. He added that the effort is more necessary now than ever, the goal being to make sure deception does not flourish.

Another stark example was the robocall that mimicked the voice of US President Joe Biden and urged New Hampshire residents not to vote in the state’s primary. A prompt investigation traced the AI-generated calls to a firm based in Texas.

Image: Digital Information World - AIgen

Meta In The Hotseat As EU Watchdogs Express Concern Over Ad-Free Subscription Model

Tech giant Meta is facing intense scrutiny from watchdogs in the EU.


The news follows the 2023 rollout of ad-free subscriptions; EU regulators find the ‘pay or okay’ model worrisome, and they’re making sure Meta knows how they feel about it.

In a recently published joint letter, several European watchdogs spoke out against the ad-free subscription model.

The leading concern is the roughly $14 monthly fee Meta charges users who want to keep using its apps without being tracked for advertising. The groups expressing concern include organizations from Norway, Germany, and the Netherlands.

Many concerned parties feel the fee in Meta’s pay-or-okay model is too expensive, leaving users with little real ability to reject tracking.

The document also points out that news publishers are left with little income, as a major share of profits stays with the advertisers themselves.
The results of pay-or-okay policies haven’t impressed experts either. Many point out the inconsistency that a button accepting tracking automatically is deemed illegal, while charging a fee to reject tracking is not.

The letter also notes how pay-or-okay schemes distort consent: acceptance rates approach 100 percent, which critics say shows the choice is not freely given when clicking the rejection button can end up costing thousands of euros.

Paying such fees adds up quickly: the letter suggests families could be spending close to $38,000 each year if the roughly 35 applications on their devices all adopted a pay-or-okay business model.
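As a rough back-of-the-envelope check, a figure in that range pencils out under assumptions supplied here purely for illustration (a hypothetical household of four, 35 apps per person, and a made-up per-app fee of $22.50 a month; none of these inputs come from the letter itself):

```python
# Back-of-the-envelope: annual household cost if every app went pay-or-okay.
# All inputs below are illustrative assumptions, not figures from the letter.
monthly_fee_per_app = 22.50   # hypothetical per-app fee, USD
apps_per_person = 35          # apps on a typical device
family_members = 4            # hypothetical household size

annual_cost = monthly_fee_per_app * 12 * apps_per_person * family_members
print(f"${annual_cost:,.0f} per year")  # $37,800 per year
```

Change any of the assumed inputs and the total moves proportionally, which is the letter’s underlying point: small per-app fees compound across a household’s devices.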

Meta, for its part, says the ad-free subscription addresses the latest regulatory developments, guidance, and judgments handed down by leading EU regulators and courts over the years.

Meta disagrees, saying its approach complies with EU law as well as the stringent Digital Markets Act, but the 28 signatory organizations feel the model robs users of a genuinely free choice over how their data is used.

Image: DIW

Meta Opens Up Greater Access To Its Data To Keep Voters Informed Of Political Shifts

Tech giant Meta is gearing up for the upcoming US elections, as well as crucial votes set to take place around the globe very soon.


The company says the goal is to provide more data to academics and to help ensure people stay aware of what’s taking place around them.

Over time, this is meant to keep voters informed about what’s happening in the world of politics so they can make better decisions along the way.
Remember, Meta has been criticized repeatedly for enabling misinformation to spread across its apps, and the problem seems to peak each time elections take place.

The goal is to help people understand political shifts as they arise, because things keep fluctuating and voters need to know what’s taking place.

The initiative was launched last month as a partnership with the COS, through which the tech giant opened up more content from a range of public figures to researchers.
This was done to help researchers gauge the growing impact of Facebook and Instagram activity on society, cultural norms, and even politics.

According to Meta, researchers can now download content posted by public figures and famous personalities in CSV format directly through the user interface, without needing access via virtual rooms.
The update should encourage more study of public discourse on social media and make it easier to track opinion shifts and how public figures influence people’s behavior.

In a similar vein, Meta is also launching its Data Protection Assessment questionnaire, which developers will use when their applications are vetted for data access.

200,000 Private Records From Facebook’s User Database Stolen, Hackers Forum Confirms

A hacker has raised the alarm among Facebook users after claiming that 200,000 personal data records were stolen from the firm’s database.


The news is alarming for obvious reasons: the claims state that a cybercriminal dubbed ‘alogoatson’ breached a contractor that manages Facebook’s cloud services and stole a portion of the user database containing a significant number of entries.

The data was put up for sale by a prominent threat actor dubbed ‘IntelBroker’, notorious for a long list of leaks, including data stolen from General Electric, and other high-profile attacks.
The sample includes full names, profile image links, and hashed passwords, along with profile ratings, settings, and plenty of reviews.

The hacker also claimed that the compromised data included physical IDs.
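Even hashed passwords in a leak like this are not harmless: if they were hashed with a fast, unsalted algorithm, attackers can test common passwords offline at enormous speed. A minimal sketch of such an offline dictionary check, assuming (purely for illustration) unsalted SHA-256 hashes:

```python
import hashlib

# Illustrative only: a leaked, unsalted SHA-256 hash of a weak password.
leaked_hash = hashlib.sha256(b"letmein").hexdigest()

# An attacker simply hashes candidate passwords and compares the digests.
wordlist = ["123456", "password", "letmein", "qwerty"]
cracked = next(
    (w for w in wordlist if hashlib.sha256(w.encode()).hexdigest() == leaked_hash),
    None,
)
print(cracked)  # letmein
```

This is why services are expected to use slow, salted schemes such as bcrypt or Argon2, which make this kind of offline guessing vastly more expensive, and why weak passwords remain dangerous even when only hashes leak.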

The database was first put up for sale in February and contains close to 24,000 email addresses along with other compromised information. Media outlets have asked Meta for comment, but there has been no response so far.

This is clearly not the first time Facebook has been at the center of a major data leak. In 2022, a database containing records of close to 533 million Facebook users was published online for free.
The company has long been slammed for letting third parties harvest user data, most infamously in the Cambridge Analytica scandal.

The danger cannot be ignored: a leak of this much private data could affect the lives of millions, so there’s a lot at stake.

Threat actors can use such data for phishing and other convincing, targeted attacks against the individuals whose information was exposed.

Media outlets continue to update their data leak checkers with records from new breaches, and the standing advice is the same: stay vigilant, protect your privacy, and use passwords that are hard to crack.
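One practical takeaway from leaks like this is to use long, random, unique passwords rather than memorable ones. A minimal sketch of generating one with Python’s standard `secrets` module:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a cryptographically random password
    from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

pw = generate_password()
print(pw)  # e.g. a 16-character random string
```

In practice a password manager does this for you, but the principle is the same: randomness from a cryptographic source, not words an attacker’s wordlist will contain.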

Photo: Digital Information World - AIgen

H/T: Bleepingcomputer / Cybernews

Meta Considers Revisiting Its Hate Speech Policy After Massive Concern Over ‘Zionist’ Terminology

Tech giant Meta is under pressure amid growing concern over its hate speech policy.


Many users were wary of how the term ‘Zionist’ was being used in posts concerning Arab and Jewish communities.

The policy currently allows use of the term in political discourse but removes it when it refers to Jews or Israelis directly in a violent or dehumanizing context. A Meta representative confirmed this in an email, first spotted by The Intercept, adding that the company plans to invite others to discuss the matter in the future.

Meanwhile, the email mentioned that the firm is considering reviewing the policy in light of posts and concerns raised by the users who are the real stakeholders here.

A policy change could therefore be on the way, while advocacy groups (including MPower Change and 7amleh) question how the policy is enforced, and in particular whether removal decisions are made by the platform’s algorithms or by humans.

For a while now, Meta has been blasted over what critics call unfair censorship of pro-Palestinian content. The groups raised questions about how the policy would be enforced when it comes to detecting and removing this kind of language.

Meta’s AI-based systems flag posts deemed problematic, currently without human review, and these advocacy groups were among those in attendance at a meeting with Meta on the issue.

Right before that meeting, 73 different organizations sent a letter to the company warning that extending the policy could mischaracterize conversations about Zionists, treating the term as a proxy for Jews or Israelis and conflating criticism of Israel with antisemitism. Such a move, they argued, would stop Palestinians from sharing their daily experiences with the world.

During the meeting, Meta shared examples of posts that would be removed under the policy, including posts referring to Zionists as rats.

The groups argued that Meta does not have a reliable track record when it comes to protecting Palestinian speech in its content moderation decisions.

In the letter, the organizations expressed serious concern about Meta’s lack of response to the censorship of pro-Palestinian content, which they say has been at an all-time high for quite some time now.

They argue the proposal is ineffective at combating antisemitism and ignores the oppression of Palestinians, at a time when courts and human rights experts acknowledge that something as severe as genocide may be taking place in the region.

“There is a real danger that such policy revisions would stifle free expression of voices speaking out against the Israeli government’s systematic violations of Palestinian rights, and its ongoing onslaught in Gaza, where a real and imminent risk of genocide looms large,” expressed Alia Al Ghussain, Researcher and Advisor on Artificial Intelligence and Human Rights at Amnesty Tech.

Meta's policy allows the term "Zionists" in political discourse but removes it when linked to Jews or Israelis in a violent context.
Photo: Digital Information World - AIgen/HumanEdited

Meta Changes How Instagram and Threads Handle Political Content

 Meta is changing the way Instagram and Threads show political content. The company wants to avoid making Threads like Twitter, where political debates can get very heated. Now, Instagram and Threads won't "proactively" show users political posts. This is similar to what Meta already does on Facebook. It has cut down on political content in different places like the News Feed and video suggestions.


Image: Digital Information World

Meta plans to bring these changes to Instagram and Threads as the 2024 U.S. elections get closer. This means less political content in Instagram Reels and the Explore section, as well as in the main feed of both Instagram and Threads.

Threads is trying to be different from Twitter, avoiding news and political debates. Meta has delayed adding a trends feature to Threads and doesn’t want to push news content there.

Meta's new rules affect how Instagram suggests posts to users. But, if someone follows an account that shares political content, they will still see those posts in their feed and stories. It just means those posts won't be suggested to people who don't follow the account. Instagram will let professional accounts check if they can be suggested and change their content if they want to be.

Users who like political content can choose to see it in their settings on Instagram and Threads. Facebook will have a similar option later.


Meta is making these changes slowly. They want to be careful after facing criticism for spreading hate and misinformation in the past. This could also help with lawmakers who are thinking about how to handle big tech companies.

Meta Lays Down New Rules For Greater AI Transparency Surrounding Its Apps

Tech giant Meta is moving to bring greater transparency to its platforms.


Facebook’s parent firm says it plans to implement more rules aimed at transparency, alongside technical measures for detecting AI-generated content.

Detection won’t always be possible; plenty of tools available today can subvert digital watermarks with ease. But Meta says it hopes to help set new industry standards for AI detection, collaborating with a series of other providers to ensure AI transparency and create workable rules for flagging such content online.

Tech giant Meta says it is building tools that can identify the invisible markers embedded in AI-generated images.


That means labeling images generated on platforms like Google, Shutterstock, Adobe, Microsoft, and even Midjourney. These detection measures will allow Meta and a host of other apps to label content made with generative AI, so everyone is well informed about what they’re seeing or reading online.


This should help limit the spread of AI-driven misinformation online; while detection capacity across the AI sector has its limits, the effort cannot be ignored.

The news comes as some of the world’s top firms begin labeling images as AI-generated or human-made so people know exactly what they’re looking at.

Experts have raised concerns about this aspect of AI development for years. Generative AI tools like ChatGPT are certainly a major technical innovation, which is exactly why a more cautious approach is needed: the public should be made aware of the harms and risks of misuse before it’s too late.

As it is, AI tools have already caused problems in contexts like elections, but with greater transparency and image labeling, Meta believes AI content can be made easier to detect than before.

Plenty of safeguards are being developed on this front, and as search giant Google has confirmed, deploying such tools earlier rather than later is the need of the moment.

More technical progress and stronger regulation can set the stage for better management than before. It may take a few years, but with the right tools in place, the tech world can close the loopholes many of us fear today.

Zuckerberg Faces Tough Questions in Senate Over Meta's Role in Child Safety

Mark Zuckerberg, the Chief Executive Officer of Meta, expressed his heartfelt apologies during a Senate hearing on online child safety, acknowledging the distress experienced by parents who attributed their children's tragic outcomes to Instagram. Senator Josh Hawley's inquiry prompted Zuckerberg's candid response, "I’m sorry for everything you’ve all gone through. It’s a terrible ordeal, and no family should endure the hardships yours have faced."


The Senate Judiciary Committee convened the hearing, titled “Big Tech and the Online Child Sexual Exploitation Crisis,” where Zuckerberg, alongside the CEOs of TikTok, Discord, X, and Snap, faced a barrage of queries from lawmakers. Holding snapshots of their children, parents confronted the tech leaders, donning blue ribbons advocating the "STOP Online Harms! Pass KOSA!" initiative, urging the enactment of the Kids Online Safety Act.

Upon Zuckerberg's entrance, audible disapproval emanated from some parents, underscoring the intense scrutiny Meta has faced concerning child safety issues on its platforms. While addressing parents, Zuckerberg's words weren't confined to the microphone but resonated on a livestream. Post-apology, he assured parents of ongoing efforts, emphasizing, "This is why we invest significantly and will persist in industry-leading endeavors to ensure that no one has to endure the hardships your families have faced.”

Throughout the hearing, Zuckerberg confronted rigorous questioning, notably about nonconsensual explicit content, drug-related fatalities linked to Meta's platforms, and various other concerns. Meta grapples with a federal lawsuit from numerous states, alleging intentional creation of "psychologically manipulative" features on Facebook and Instagram, concealing internal data that reveals harm to young users.
Senator Richard Blumenthal highlighted emails purportedly received by Zuckerberg from Meta’s global affairs director, Nick Clegg, indicating concerns about well-being topics such as problematic use, bullying, harassment connections, and suicidal self-injury. Clegg, a former deputy prime minister of the UK, communicated that Meta’s safety efforts were constrained by insufficient investment.

Senator Hawley referred to a 2021 Wall Street Journal investigation revealing Meta's awareness of Instagram's detrimental impact on teenagers' mental health. Zuckerberg contested Hawley’s presentation of these details as “facts” and claimed selective interpretation of the research.

Responding to questions from Senator Welch about layoffs in the trust and safety departments, Zuckerberg clarified that Meta's layoffs were not sector-focused. Senator Tillis emphasized a balance between the executives' humanity and their corporate responsibilities, encouraging continuous efforts to mitigate the negative impact of their platforms.

Zuckerberg disclosed to senators that Meta employs 40,000 individuals in its trust and safety division. The hearing underscored the ongoing challenges faced by major tech companies in balancing innovation with the responsibility to protect users, particularly the vulnerable demographic of children and teenagers.

Photo: United States Senate Committee on the Judiciary

Note: Content in this story is written using AI and edited.

Adam Mosseri: Trending Topics Not a Priority for Threads, Here’s Why

Threads has been making a lot of waves recently, with many saying it is the platform most likely to supplant Twitter, now known as X, as the leading microblogging platform. One of the features the platform is considering is trending topics, but Instagram head Adam Mosseri doesn’t think it should be much of a priority.


According to him, trending topics are just one of several features, along with hashtags, following feeds, edit buttons, and like lists, that won’t do much to help the platform grow in the long run. You might expect trending topics in particular to be excellent for engagement, but Mosseri doesn’t believe they would have a measurable impact.

That said, X derives a lot of its value from features like these, all of which drive engagement. Trending topics go a long way toward keeping users hooked, and lists let users discover content tailored to their specific interests.

This goes to show that our assumptions about how social media platforms work don’t always match reality. Meta’s priority has been to figure out what users want through algorithms, an approach that has yielded a 20% increase in Reels usage. It might not be a stretch to say that users don’t always know what they want, and that people like Mosseri, who actually see the incoming data, have a better sense of how to help Threads reach its full potential.

Photo: Digital Information World - AIgen/HumanEdited

Meta’s Lawsuit Against Bright Data Just Got Hindered By This Court Ruling

Meta filed a lawsuit against the Israeli tech firm Bright Data last year, claiming it was illegally scraping data from Facebook and Instagram. The suit alleged that Bright Data’s harvesting practices violated Meta’s terms of service, but a court has now ruled in the Israeli firm’s favor, finding that Meta failed to provide sufficient evidence to back up its claims.


Notably, Meta itself has used Bright Data’s services in the past, hiring the firm to scrape e-commerce data from various websites, a practice that continued right up until the lawsuit was filed. It also bears mentioning that Bright Data might be collecting data from minors, which would constitute a legal violation, although that question isn’t part of the current court proceedings.

Bright Data’s retort to Meta’s allegations is that it only scraped publicly available data, and since Meta has not provided sufficient proof that non-public data was acquired, the lawsuit may not go where the tech company wants it to. The evidence Meta did present was a data set containing 615 million Instagram records, which Bright Data was selling for around $860,000. However, Meta failed to prove that this data could only be accessed by entering login credentials.

Meta also claimed that Bright Data bypassed CAPTCHAs to reach the data, but the court held that this is not the same as bypassing password protection. It remains to be seen where the case goes from here, but for now Meta is struggling to prove that Bright Data violated its terms of use, although that could change in the future.

Court rules Meta lacks evidence in claims against Israeli firm Bright Data over data scraping practices.

Meta’s Automated Moderation Is Raising Serious Concerns With Its Oversight Board After Controversial Instagram Post Allowed

Meta’s reliance on automated moderation to handle a wide array of controversial decisions across its apps is being questioned by its own Oversight Board.


The news comes after the tech giant was slammed for leaving up an Instagram post denying the Holocaust, which many, including the board itself, found shocking. Under the company’s own policies, Holocaust denial counts as hate speech.
The questionable post used a meme of Squidward, a SpongeBob SquarePants character, to present supposed ‘true facts’ about the Holocaust, and many were appalled that it stayed up despite grossly misrepresenting historical fact.

Users have reported the post repeatedly over the years; the fact that it has been up since 2020 with Meta doing nothing about it has raised questions.

Meta’s systems repeatedly determined that the post did not violate company rules and closed the reports through automated means.

In May of last year, one person appealed Meta’s decision to leave the offensive post on Instagram, questioning why it was allowed. That appeal was also closed automatically under policies such as the COVID-19-era automation rules still in place, which is when the case reached Meta’s Oversight Board.


An assessment of Holocaust denial content across Meta’s apps revealed that the Squidward meme was being used repeatedly to spread antisemitic narratives. It also found that some users were deliberately evading detection in order to spread denial content far and wide.

They did so through tactics such as using alternate spellings of key terms and couching denial in cartoons and jokes.

The board added that it is very concerned about Meta’s handling of the matter and hopes the company will end the automated policies it put in place during the pandemic, which were still being used to close cases as recently as May of last year.

It feels those policies are no longer effective or useful and should therefore be retired. The board also flagged that human reviewers cannot label violating content specifically as Holocaust denial; such posts are simply filtered into a broader bucket labeled hate speech.

The board asked for more data on this front, particularly given how heavily Meta relies on AI technology to moderate content and enforce its hate speech policies.

The Oversight Board now wants Meta to take technical steps to measure how accurately it enforces its rules against Holocaust denial content, which entails gathering more granular data. It also asked Meta to confirm transparently whether it has fully stopped the automation policies put in place at the start of the pandemic, and it issued recommendations on the technical steps Meta should consider to ensure accurate enforcement.

That includes gathering far more granular information, which the board believes is in Meta’s own best interest.

When asked for comment, Meta told Engadget that it had issued a formal response on its transparency site. The company acknowledged that leaving the offensive post up may have been done in error.

At the time, it vowed to get to the bottom of the matter and figure out what really went wrong, and it is now conducting a comprehensive review of similar content in parallel contexts.
If more stringent action is needed, the company says it will take it immediately. For now, it plans to review the matter in detail and issue updates as it goes.