Forem

# safety

Discussions on childproofing, online safety, and keeping kids safe.

Posts

Non-decision-making AI governance with internal audit and stop conditions
Comments · 1 min read

How Digital HSEQ Systems Are Making Ships Safer (And Why Developers Should Care)
Comments · 4 min read

When High-Pressure Testing Becomes a Safety Engineering Problem
1 · Comments · 2 min read

Why Your AI Needs Both Intuition and Rules
Comments · 3 min read

Common Solar Installation Safety Failures in South Africa (And How to Avoid Them)
Comments · 2 min read

Institutional audit of a non-decision AI framework (27-document corpus)
Comments · 1 min read

A Formal Verification of the XRP Ledger
1 · Comments · 6 min read

Non-decision AI: stop conditions as a first-class control surface
Comments · 1 min read

AI Safety Isn’t About Better Answers. It’s About Knowing When to Stop.
Comments · 1 min read

If AI Doesn’t Produce Measurable Improvement, It Should Stay Silent
Comments · 1 min read

If AI Doesn’t Improve Anything, It Should Stop Talking
Comments · 1 min read

If AI Doesn’t Improve Anything, It Should Stay Silent
Comments · 1 min read

If AI Doesn’t Produce Measurable Improvement, It Should Stay Silent
Comments · 1 min read

If AI Doesn’t Improve Anything, It Should Stop Talking
Comments · 1 min read

DELTΔX: A non-decision AI governance framework with explicit stop conditions
Comments 2 · 1 min read

Between Safety and Value: Defining 'Correctness' Through Nine Years of Journey
Comments · 8 min read

LLMs + Tool Calls: Clever But Cursed
7 · Comments · 2 min read

Hallucinating Help
1 · Comments · 9 min read

Safety vs Security in Software: A Practical Guide for Engineers and Infrastructure Teams
Comments · 9 min read

Autonomous Vehicle Reality Check: Smarter AI Through Self-Verification
Comments · 2 min read

LLM Context Window Stress Testing: Reliability Under Load
9 · Comments 1 · 4 min read

🐧 Hardening Linux: a practical guide to safe operation
2 · Comments · 2 min read

AI Chatbot Developers: What's the "Other Safety" We Should Be Thinking About Now? User Protection.
5 · Comments · 24 min read

Trust & Transparency: Why we updated our review system at mobile.de
Comments · 2 min read

Google Shibuya - AI Safety: how do you control what’s smarter than you?
Comments · 1 min read