OpenAI is hunting for an Insider Risk Investigator to “fortify the organization against internal security threats.”
“You’ll play a crucial role in safeguarding OpenAI’s assets by analyzing anomalous activities, promoting a secure culture, and interacting with various departments to mitigate risks,” a job ad on OpenAI’s career page reads.
For an annual salary of $140,000 to $275,000, the hire will be tasked with detecting, analyzing, and mitigating potential insider threats, liaising with the legal and HR departments, and implementing data loss prevention controls, among other things.
The compensation also includes health insurance for a new employee and their family, a 401(k) plan, mental health and wellness support, and 18+ company holidays per year, among other benefits.
“Your expertise will be instrumental in protecting OpenAI against internal risks, thereby contributing to the broader societal benefits of artificial intelligence,” OpenAI teased.
MSPoweruser, the media outlet that first spotted the opening, said the vacancy has been up since at least January this year.
The outlet also noted a few recent high-profile leaks at OpenAI, such as ChatGPT users discovering private data, including unpublished research papers, in their chats.
Candidates applying for the position should have at least a Bachelor’s degree in a related subject, over three years of experience in insider threat analysis, and proficiency with Security Information and Event Management (SIEM) and User Behavior Analytics (UBA) tools.