
OpenAI Enhances Internal Safety Guardrails and Gives Board Veto Power Over New Models

Plus: Sam Altman invests in an AI productivity assistant

Today’s Highlights:

📰 News: OpenAI revamps its safety process, creating a new safety advisory group and giving its board veto power over model releases

💰 Funding: Shelpful raises $3M from Sam Altman.

⚡️ Top News Stories:

1. OpenAI is expanding its internal safety processes with a new "safety advisory group" and updated "Preparedness Framework" to oversee the development of AI models and mitigate potential risks.

  • OpenAI's "preparedness" team, led by Aleksander Madry, will continuously evaluate AI systems for risks, including cybersecurity and chemical, nuclear, and biological threats.

  • The new safety advisory group will review the preparedness team's reports, along w/ reports from technical teams, and make risk-focused recommendations to Altman and the board.

  • OpenAI's updated "Preparedness Framework" categorizes risks and sets guidelines for deploying models based on their risk levels, with high-risk models being restricted from deployment. 

  • The framework uses a matrix approach to evaluate risks across categories like malware creation, social engineering attacks, and dissemination of harmful information.

  • Models are scored on a scale from low to critical risk; only models rated medium or below may be deployed, and development is halted for any model rated critical (a simple sketch of this gating rule appears after this list).

  • OpenAI's board has been granted veto power over the deployment of AI models that pose significant risks: as part of the new safeguards, it can withhold the release of a model even if the company's leadership deems it safe.

  • OpenAI hopes the new framework will serve as a model for other AI companies and for future regulation, helping ensure responsible development and deployment of AI models, particularly those with potentially catastrophic risks.
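The deployment rule described above is simple enough to express as a gating function. Here is a minimal, hypothetical sketch in Python; the category names, risk levels, and decision strings are illustrative assumptions, not OpenAI's actual scorecard or code.

```python
from enum import IntEnum

# Hypothetical risk scale mirroring the framework's low-to-critical levels.
class Risk(IntEnum):
    LOW = 0
    MEDIUM = 1
    HIGH = 2
    CRITICAL = 3

def release_decision(scores: dict[str, Risk]) -> str:
    """Gating rule as described: deploy only if every tracked category is
    MEDIUM or below; halt further development if any category is CRITICAL."""
    worst = max(scores.values())
    if worst >= Risk.CRITICAL:
        return "halt development"
    if worst >= Risk.HIGH:
        return "no deployment (continue development with mitigations)"
    return "eligible for deployment (subject to board review)"

# Usage: a model rated HIGH on cybersecurity is blocked from deployment.
print(release_decision({
    "cybersecurity": Risk.HIGH,      # assumed category names for illustration
    "cbrn": Risk.LOW,
    "persuasion": Risk.MEDIUM,
    "model_autonomy": Risk.LOW,
}))
```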

2. OpenAI shares new research on superalignment, which explores using small AI models to supervise larger, more capable ones.

3. Google is working on a new AI assistant, Pixie, powered by its Gemini foundation model; it is intended as a more personalized version of Google Assistant capable of handling complex, multimodal tasks.

4. TikTok's parent company, ByteDance, has reportedly been secretly using OpenAI's technology to build a competing AI model, violating OpenAI's terms of service and breaching ethical norms.

5. Humana, a major U.S. health insurance provider, faces a lawsuit for allegedly using an AI model w/ a 90% error rate to override doctors' medical judgments and deny care to elderly people.

6. Scientists from MIT, the University of California, and AI company Aizip have developed a process in which large AI models autonomously build smaller, cost-effective models.

7. Since May, there has been a 1,000%+ increase in websites disseminating AI-created false information about elections, wars, and natural disasters.

8. Salesforce is boosting its AI capabilities w/ Vector Database support and enhanced Einstein Copilot.

9. Krutrim, an AI startup, has launched India's first multilingual LLM, capable of generating text in 10 Indian languages.

10. Meta, Google, Microsoft, and OpenAI are racing to apply advanced AI that understands images and language to wearable tech.

11. Life2vec, an AI model trained on data from 6M people in Denmark, has been found to predict the likelihood of death more accurately than existing models used in the insurance industry.

💰 Top Funding News:

1. Shelpful, an AI productivity assistant (HabitGPT) combined w/ real human accountability coaches ('Shelpers'), raised a $3M Seed Round from Apollo Projects, led by Sam Altman.

2. Redactable, an AI-driven web app designed to redact sensitive documents quickly and permanently, raised a $5.5M Seed Round led by Gradient Ventures w/ Wocstar Fund and others.

3. Distributional, a modern enterprise AI testing and evaluation platform to address the risks associated w/ AI in business applications, raised an $11M Seed Round led by Andreessen Horowitz, w/ Operator Stack, Point72 Ventures, SV Angel, Two Sigma, and Willowtree Investments. 

4. Deep Apple Therapeutics, an AI-driven virtual screening platform for drug discovery, raised a $52M Series A led by ATP.

5. Totus Medicines, a small molecule drug discovery and development company using covalent libraries and AI tools, raised a $66M Series B led by DCVC Bio.

That's all for today's email! If you want more, please follow us on the social channels linked below, or check out our website!


Share our newsletter: If you like our work, please share/forward this email with your friends, colleagues, and family. It's the best way to support us!

If this email was forwarded to you, please sign up here to continue receiving our newsletter.

Want your content, product, jobs, or event featured in our newsletter? Reply to this email with the details, and our team will reach out to you.

Do you use AI for work? Tell us how, and you could be featured in our newsletter!

Check out our website for more resources, including a list of AI investors, products, events, and Twitter follows.

For an archive of all our posts, click here.

We'd love to hear from you! You can always leave us comments or feedback by replying to this email!

Powered by AI. Curated and edited by Humans.