Projects using Google Docs in Dublin
Max Ivory (Pro)
TOAST & TANGO
Camila Lins
Performance Management cycle
Kami Harcej
“When decisions become real without valid authority”

This work examines how AI-driven decisions can appear valid while lacking the authority or conditions required to become actionable. It focuses on a critical gap:
– outputs are treated as admissible
– decisions are accepted as valid
– but authority and conditions are never fully established

The analysis highlights:
– where systems assume authority rather than explicitly validating it
– how admissibility is inferred instead of resolved
– where decisions become actionable without sufficient grounding
– why systems can execute correctly while lacking a valid basis

The goal is to identify where decisions are allowed to become real, not because they are admissible, but because nothing prevents them from being treated as such.
Sean Gibney
Team Manager for Just Eat Ireland
Veniamin Livandovskyi
UX/UI Design of mobile app
Max Ivory (Pro)
AREBUS
Kami Harcej
Decision Integrity Stress Test

A structured approach to identifying where AI-driven decisions fail under real-world conditions. This work focuses on testing whether decisions that are:
– correct
– compliant
– and authorized
…remain valid at the moment they are executed.

The stress test surfaces:
– where decisions drift from their original conditions
– where systems cannot re-establish grounding
– where inadmissible actions are still allowed to execute

It is designed to identify failure points that do not appear in audits, logs or standard governance processes. Used before deployment, it helps ensure that AI systems can withstand real conditions – not just pass validation on paper.
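The "drift from original conditions" failure mode described above could be sketched roughly as follows. This is a minimal illustration, not the author's implementation: the `Decision` type, the `commit` gate, and all field names are hypothetical, chosen only to show a decision being re-validated against current state at the moment of execution.

```python
# Hypothetical sketch: a decision approved earlier is re-checked against
# current state at commit time, and blocked if its grounding has drifted.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Decision:
    action: str
    # Conditions captured when the decision was approved: each maps a
    # state key to the value it must still hold at the moment of commit.
    approved_conditions: Dict[str, object]

def commit(decision: Decision, current_state: Dict[str, object],
           execute: Callable[[str], None]) -> bool:
    """Execute the action only if every approved condition still holds."""
    drifted = {k: v for k, v in decision.approved_conditions.items()
               if current_state.get(k) != v}
    if drifted:
        # The decision has drifted from its original conditions: block it.
        return False
    execute(decision.action)
    return True

# Example: a refund approved while the account was active
d = Decision("refund_order_42", {"account_status": "active"})
log = []
ok = commit(d, {"account_status": "suspended"}, log.append)
# ok is False and nothing was executed, because the account status drifted
```

The point of the sketch is the placement of the check: validation happens at execution, not only at approval, so a decision that was correct, compliant and authorized when made can still be rejected if the world has changed underneath it.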
Max Ivory (Pro)
GO GUIDE
Kami Harcej
This work demonstrates how AI outputs can appear valid while still being unsafe to act on under real conditions. It focuses on the gap between:
– what a system produces
– and what a system is actually allowed to execute

The analysis highlights:
– how outputs can pass evaluation but fail at execution
– why risk emerges at the transition from intent to action
– where systems lack constraints at the point of commit

The goal is to identify where AI-driven decisions become actionable without being fully validated against current state, authority and conditions.
Max Ivory (Pro)
VITAL-X
Max Ivory (Pro)
CLUBHAUS
Max Ivory (Pro)
GOOD HEAVENS
Max Ivory (Pro)
PLOTPOINT