The move comes after pressure and criticism from governments, which have urged technology companies to act quickly to remove material such as terrorist propaganda.
As quoted by the BBC, the company said it uses artificial intelligence to spot images, videos and text related to terrorism, as well as clusters of fake accounts.
"We want to find terrorist content immediately, before people in our community have seen it," the company said.
The ability of the so-called Islamic State to use technology to radicalise and recruit people has raised major questions for the big technology companies.
They have faced criticism for running platforms used to spread extremist ideology and inspire people to carry out acts of violence.
Governments, and Britain's in particular, have pushed for more action in recent months, and across Europe the debate has moved towards legislation or regulation.
Among the options said to be under consideration is the creation of new legal liability for companies that fail to remove certain content, which could include fines.
Facebook says its announcement of new measures to find and remove material shows it now wants to do more than just talk about the problem.
One criticism made by British security officials has been the extent to which companies rely on others to report extremist content rather than acting proactively themselves.
Facebook has previously announced that it is adding 3,000 employees to review content flagged by users.
But it also says that it already finds more than half of the accounts it removes for supporting terrorism itself.
It says it is also now using new technology to improve this proactive work.
"We know we can better use technology and specifically artificial intelligence to stop the spread of terrorist content on Facebook," the company said.