The former chief scientist and co-founder of OpenAI has announced the launch of his own artificial intelligence (AI) company, which he said would focus on safety.

Ilya Sutskever said he was launching Safe Superintelligence and that building safe AI was “our mission, our name, and our entire product roadmap”.

In a launch statement on the new company’s website, the firm said it would approach “safety and capabilities in tandem” as “technical problems to be solved”, aiming to “advance capabilities as fast as possible while making sure our safety always remains ahead”.

Some critics have raised concerns that major tech and AI firms are too focused on reaping the commercial benefits of the emerging technology, and are neglecting safety principles in the process – an issue raised in recent months by several former OpenAI staff members when announcing they were leaving the company.

Elon Musk, a co-founder of OpenAI, has also accused the company of abandoning its original mission to develop open-source AI to focus on commercial gain.

In what appeared to be a direct response to those concerns, Safe Superintelligence’s launch statement said: “Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.”

Mr Sutskever was involved in the high-profile attempt to oust Sam Altman as OpenAI chief executive last year. He was removed from the company’s board following Mr Altman’s swift return, and left the company in May this year.

He has been joined at Safe Superintelligence by former OpenAI researcher Daniel Levy and former Apple AI lead Daniel Gross, both of whom are named as co-founders of the new firm, which has offices in California and Tel Aviv, Israel.

The trio said the company was “the world’s first straight-shot SSI (safe superintelligence) lab, with one goal and one product: a safe superintelligence”, calling it the “most important technical problem of our time”.