Stanford study finds major AI models significantly lack transparency
Oct 20, 2023
Made with Midjourney. Prompt: ‘an ominous robot standing behind translucent frosted glass with its hand pressed against the glass --ar 16:9 --v 5.2’
Today’s Highlights:
📰 Top Stories: A Stanford study found that major AI models significantly lack transparency + DALL-E 3 is available to ChatGPT Plus and Enterprise users
👀 Content: AI chatbots can guess your personal information from what you type + How artificial intelligence will affect our children’s future
💰 Funding: Zordi raised $20M to build autonomous greenhouses equipped with AI and robotics
⚡️ Quick News Hits
US export rules block NVIDIA from selling AI chips to China, even as the company partners with Foxconn to build advanced "AI factories" for various applications, including autonomous vehicles and robotics.
NVIDIA enhances generative AI capabilities with TensorRT-LLM support for Windows, aiming to accelerate LLMs on its GPUs and improve efficiency.
PwC teams up with OpenAI to leverage AI for tax, legal, and HR advice, offering clients guidance and assistance in completing complex tasks.
YouTube is in talks w/ major music labels for rights to develop an AI-powered tool that replicates the voices of famous musicians in audio recordings.
Figure, the AI robotics startup building practical general-purpose humanoid robots, released video of its robot walking.
EU proposes three-tiered regulation for generative AI models, aiming to set rules and transparency standards.
Amazon begins testing Agility's bipedal robot Digit for possible warehouse applications.
Google DeepMind unveils UniSim, an ML model for creating realistic simulations of interactions between humans, robots, and agents.
Amazon, Glassdoor, and travel companies unite in the Coalition for Trusted Reviews to combat the proliferation of AI-generated fake online reviews.
Researchers at Meta develop Image Decoder, a system that can visualize a person's thoughts or what they are seeing based on brain activity.
IBM Consulting and AWS collaborate on generative AI solutions for contact centers, cloud value chains, and supply chains.
📰 Top Stories
(Source: Stanford HAI)
TLDR: A Stanford University index reveals that major AI companies, including OpenAI, Meta, Google, and Amazon, lack transparency in disclosing key details about their AI models.
The index rates companies based on whether they disclose information like training data sources, wages for workers involved, and environmental impact. The study evaluated 10 AI systems, finding that none achieve more than 54% transparency across various criteria. Meta's Llama 2 is considered the most open, while Amazon's Titan Text is the least transparent.
The report suggests that competitive reasons don't justify this level of secrecy and that increased transparency could benefit the field without harming competitiveness. Secrecy in AI development has led to concerns about accountability and the potential misuse of AI technology, especially in areas like criminal justice and healthcare.
The Big Picture: The lack of transparency in AI development raises questions about accountability and risks stifling scientific advancement in the field. Striking a balance between proprietary interests and open collaboration is crucial for the future of AI and for ensuring that it remains a scientific discipline rather than a purely profit-driven business.
(Source: OpenAI)
TLDR: OpenAI has expanded access to its DALL-E 3 image generator, offering it to ChatGPT Plus subscribers and ChatGPT for Enterprise users.
DALL-E 3 enhances image generation with intricate details, text, hands, faces, and versatile aspect ratios, making it useful for marketing and branding.
Users can interact with DALL-E 3 within ChatGPT, enabling real-time adjustments to images.
OpenAI is also working on a provenance classifier, aiming for 95-99% accuracy in identifying DALL-E 3 generated images, a tool to combat AI-driven disinformation.
(Source: World Health Organization)
TLDR: The World Health Organization (WHO) has published guidelines for regulating artificial intelligence (AI) in healthcare.
The guidelines highlight the importance of AI system safety, effectiveness, and accessibility for those who need them.
AI could enhance health outcomes by improving diagnosis, treatment, and medical knowledge, especially in areas with a shortage of specialists.
Concerns include unethical data collection, cybersecurity threats, and bias amplification.
WHO recommends six key areas for AI health regulation, including transparency, risk management, data quality, and collaboration among stakeholders.
Big Picture: As AI continues to transform healthcare, these guidelines address the critical need for ethical and effective regulation to harness AI's potential while safeguarding privacy and minimizing biases and risks.
(10 min read) (Source: Nature)
TLDR: IBM has developed the NorthPole processor chip, designed to enhance artificial intelligence (AI) performance while significantly reducing energy consumption.
NorthPole's innovation eliminates the need for frequent external memory access, addressing the Von Neumann bottleneck, which typically slows down AI and results in energy inefficiencies.
The chip consists of 256 cores, each with its own memory, mitigating the bottleneck within each core and achieving exceptional energy efficiency.
While it's particularly efficient for image recognition, it may not be suitable for large language models like ChatGPT.
This breakthrough highlights AI's potential to become faster and more energy-efficient, impacting various applications, including self-driving cars.
(Source: Google DeepMind)
TLDR: Generative AI systems like language models have wide-ranging applications but can pose ethical and social risks.
Google DeepMind proposed a three-layered framework for evaluating these risks: assessing AI system capability, human interaction, and systemic impacts.
Context is crucial in evaluating AI risks, including how the technology is used and its broader societal implications.
Responsibility for safety evaluations falls on AI developers, application developers, public authorities, and broader stakeholders.
Current safety evaluations for generative AI have gaps in context, risk-specific assessments, and multimodality considerations.
Big Picture: Responsible development and deployment of generative AI systems require comprehensive evaluations that consider not only the technology's capabilities but also its interaction with users and its systemic impacts on society. Collaborative efforts involving various stakeholders are essential to ensure the safety of these AI systems in an evolving landscape.
👀 Interesting Reads and Content
Deep Dives
An Industry Insider Drives an Open Alternative to Big Tech’s A.I. (The New York Times)
The generative AI boom runs through this behind-the-scenes startup (Semafor)
Google’s ‘Wartime’ Urgency to Chase ChatGPT Shakes Up Culture (The Information)
DeepMind Wants to Use AI to Solve the Climate Crisis (WIRED)
Insightful Information
How to Capitalize on Generative AI (Harvard Business Review)
OpenAI Dropped Work on New ‘Arrakis’ AI Model in Rare Setback (The Information)
Your Personal Information Is Probably Being Used to Train Generative AI Models (Scientific American)
Using AI, cartoonist Amy Kurzweil connects with deceased grandfather in 'Artificial' (NPR)
AI Chatbots Can Guess Your Personal Information From What You Type (Wired)
Analysis and Critiques
Living guidelines for generative AI — why scientists must oversee its use (Nature)
Why it'll be hard to tell if AI ever becomes conscious (MIT Tech Review)
Opinion | Time to stop AI from stealing writers’ words (The Washington Post)
How artificial intelligence will affect our children’s future (Vox)
💰 Funding News
1. Hayden AI, a company utilizing AI and geospatial analytics to develop data-driven intelligence applications for governments and businesses that improve traffic management and sustainability, raised an oversubscribed $53M Series B led by the Drawdown Fund, a growth equity firm dedicated to addressing climate change drivers.
2. Nirvana, a commercial insurance company revolutionizing the industry with AI and ML by leveraging IoT data for personalized, cost-effective coverage for fleets, raised a $57M Series B led by Lightspeed Venture Partners, General Catalyst, and Valor Equity Partners.
3. Procurify, a company offering an Intelligent Spend Management platform that helps organizations consolidate and streamline their procure-to-pay workflows with AI, raised a $50M Series C led by Ten Coves Capital and Export Development Canada (EDC).
4. Evident Vascular, a medical tech startup focused on developing an advanced intravascular ultrasound (IVUS) platform that utilizes AI to enhance imaging and streamline workflows in vascular interventions, raised a $35M Series A led by Vensana Capital.
5. Zordi, an ag-tech startup specializing in autonomous greenhouses equipped with AI and robotics to deliver premium fresh produce to urban areas, raised $20M in funding led by Khosla Ventures.
6. Mind Foundry, an AI startup that uses AI tools to detect cognitive decline in older drivers and assist insurers in predicting and preventing accidents, raised a $22M Series B from Aioi Nissay Dowa Insurance Co., Parkwalk Advisors, and the University of Oxford.
7. Overstory, a climate tech startup that uses AI-based products to help mitigate wildfires and improve natural resource management by providing insights into vegetation, raised a $14M Series A led by B Capital w/ The Nature Conservancy and other climate-focused investors.
8. Darwinium, a next-gen digital security and fraud prevention platform that employs edge-based AI to combat online fraud across various industries, raised an $18M Series A led by U.S. Venture Partners (USVP) w/ Blackbird, Airtree Ventures, and Accomplice.
9. Objective, Inc., a company developing an AI-native search platform called Objective Search designed to enhance website and app search functionality by using AI and ML to provide more natural queries and results, raised $13M in funding led by Matrix w/ Two Sigma Ventures and others.
10. Reality Defender, a cybersecurity company specializing in detecting and combating AI-generated media, including deepfakes, to safeguard against disinformation, raised a $15M Series A from DCVC, Comcast, ex/ante, Partnership Fund for New York City, Rackhouse Venture Capital, and Nat Friedman’s AI Grant.
11. Statement, a platform offering "cash intelligence" for companies dealing with multiple banks by assisting in liquidity management and cash flow forecasting using AI, raised a $12M Seed round led by Glilot Capital Partners w/ Citi, Mensch Capital Partners, Titan Capital, and Operator Partners.
12. Cognitive Space, a leader in intelligent space automation leveraging AI to provide SaaS services for satellite constellations, raised $4M in Seed+ funding from York IE, Draper Associates, and Dolby Family Ventures.
13. Gero, a company seeking to discover cures for age-related diseases and halt the aging process using generative AI to analyze real-world health data, raised a $6M Series A extension led by Melnichek Investments w/ VitaDAO and Leonid Lozner.
14. Bluebirds, an AI-driven platform aimed at revolutionizing outbound sales by using AI to discover and scale unique triggers for personalized outreach, raised a $5M Seed round led by Lightspeed Venture Partners w/ Y Combinator, 1984 Ventures, SOMA Capital, and sales tech veterans, including Godard Abel and Dharmesh Shah.
15. Aindo, a company specializing in synthetic data technology utilizing generative AI to create privacy-protected artificial data that replicates the characteristics of real data, raised a €6M Series A led by United Ventures.
That's all for today's email! If you want more please follow us at the social channels linked below, or check out our website!
How'd you like today's email?
Share our newsletter: If you like our work please share/forward this email with your friends, colleagues, and family. It's the best way to support us!
If this email was forwarded to you please sign up here to continue receiving them.
Want your content, product, jobs, or event featured in our newsletter? Reply to this email with the details, and our team will reach out to you.
Do you use AI for work? Tell us how, and you could be featured in our newsletter!
Check out our website for more resources, including a list of AI investors, products, events, and Twitter follows.
For an archive of all our posts, click here.
We'd love to hear from you! You can always leave us comments or feedback by replying to this email!