Information Risk Analyst - AI
Location: Portland, OR (Remote or Hybrid)
Duration: 12 months
**The client prefers local candidates who can work in the office once a week; fully remote is also acceptable. Candidate must be on our W2.**
Job Overview:
The AI Information Risk (IR) Analyst will be responsible for identifying, evaluating, and mitigating risks associated with the use of generative AI technologies in our client's organization. This role requires a blend of technical expertise, analytical skills, and strong ethical judgment to ensure that our AI initiatives are both effective and compliant with regulatory standards.
The IR team currently has five members who perform risk assessments for client applications and projects; standard applications are used in daily tasks. The Analyst will support the analysis of AI products (Anthropic's Claude 3 model and Microsoft Azure GPT-4) and the client's applications, including a generative AI application implementation with sub-bots serving specific business areas. These models run within the client's internal virtual private cloud with no internet access. A Security Analyst or Information Risk Analyst with relevant exposure, training, and perhaps one or two projects of experience could meet this need. The Analyst will work with the project team to analyze use cases for AI implementation, embedding daily with the client's internal project team to understand the security needs of upcoming roadmap items and acting as a liaison to communicate security requirements.
Key Responsibilities:
- Conduct comprehensive risk assessments of generative AI systems, identifying potential vulnerabilities and threats.
- Collaborate with cross-functional teams to design and implement robust AI security measures.
- Ensure compliance with relevant legal, regulatory, and industry standards related to AI and data privacy.
- Develop and maintain risk management frameworks and policies specific to AI applications.
- Perform regular audits and assessments to monitor the effectiveness of risk mitigation strategies.
- Provide expert guidance on AI ethics and responsible AI practices.
- Prepare detailed reports and presentations on risk assessment findings and recommendations for senior management.
- Stay updated with the latest advancements in AI, cybersecurity, and risk management practices.
Education/Experience Requirements:
- 5-10 years of risk experience, including 2 years with AI
- CIAP Certification (nice to have)
- AI certifications may be accepted as experience in lieu of a degree
Top 3 Must-Haves (Hard and/or Soft Skills):
- Understanding of AI/ML concepts, algorithms, and models.
- In-depth familiarity with cybersecurity principles and practices.
- Understanding of IT infrastructure, cloud platforms (AWS, Azure, Google Cloud), and their security protocols.
Top 3 Nice-To-Haves (Hard and/or Soft Skills):
- Ability to think analytically and critically about potential risks.
- Ability to work effectively with cross-functional teams, including data scientists, IT professionals, legal advisors, and business leaders.
- Strong sense of ethics and responsibility regarding AI deployment and its societal impacts.