Freelancers using ChatGPT in Dublin
Bob Vasic (Pro)
Let's send a message literally out of this world for #InternationalWomensDay on @contra 🪐 Celebrating the Force of Nature that is #Women everywhere. Introduc... Check it out (https://contra.com/p/MkYXogOM-lets-send-a-message-literally-out-of-this-world-for-interna)
Kami Harcej
Constraining what becomes real

Most AI governance today is focused on decisions:
→ what systems are allowed to do
→ how actions are validated
→ how outcomes are explained

But there's a deeper layer most frameworks don't touch: what the system is allowed to become over time.

Systems don't just act. They learn. And every learning event:
→ reshapes future decisions
→ redefines boundaries
→ shifts authority implicitly

Yet learning is almost always unconstrained. This creates a system that can remain:
→ compliant
→ auditable
→ aligned on paper

…while gradually drifting away from a valid basis for action. Not because a decision failed, but because the system evolved beyond what was ever admissible.

The shift is simple, but structural: learning must be treated as a governed state transition. Not something that happens automatically, but something that is:
→ evaluated
→ admitted
→ or refused

Before a system learns, it must resolve:
→ Is this grounded in a valid state?
→ Is the source admissible?
→ Does this fall within its mandate?
→ Can this be justified at the moment of incorporation?

If not, the system should not learn.

We already ask: "Is this decision valid at execution?" But we don't ask: "Was the system allowed to learn what led to it?" That's the gap. And that's where governance breaks.

This is the first layer of something deeper: moving from governing decisions to governing system evolution itself. I'll be exploring this further:
→ execution boundaries
→ admissibility
→ authority layers
→ and now: learning control

Governance doesn't end at execution. It extends to what systems are allowed to become.

#AIGovernance #AIArchitecture #DecisionIntegrity #GovernedAI #AIControl
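A minimal sketch of the learning gate the post describes, with every name invented for illustration (LearningEvent, GovernedLearner, the mandate topics, the policy reference) — not an existing framework. The four admission checks mirror the four questions above; a failed check means the event is refused rather than incorporated:

```python
# Hypothetical sketch: learning as a governed state transition.
# All names are illustrative, not from any specific governance framework.
from dataclasses import dataclass, field

@dataclass
class LearningEvent:
    source: str          # where the update came from
    payload: dict        # what the system wants to incorporate
    justification: str   # why incorporation is defensible right now

@dataclass
class GovernedLearner:
    mandate: set = field(default_factory=set)             # topics the system may learn about
    admissible_sources: set = field(default_factory=set)  # sources it may learn from
    state_valid: bool = True                               # is the current state grounded?
    knowledge: list = field(default_factory=list)

    def admit(self, event: LearningEvent) -> bool:
        """Evaluate a learning event; incorporate it only if every check passes."""
        checks = [
            self.state_valid,                              # grounded in a valid state?
            event.source in self.admissible_sources,       # admissible source?
            event.payload.get("topic") in self.mandate,    # within mandate?
            bool(event.justification),                     # justifiable at incorporation?
        ]
        if not all(checks):
            return False                  # refused: the system does not learn
        self.knowledge.append(event.payload)  # admitted: a governed state transition
        return True

learner = GovernedLearner(mandate={"pricing"}, admissible_sources={"audited_feed"})
ok = learner.admit(LearningEvent("audited_feed",
                                 {"topic": "pricing", "rule": "cap at 10%"},
                                 "approved under (hypothetical) policy P-12"))
refused = learner.admit(LearningEvent("web_scrape", {"topic": "hiring"}, ""))
print(ok, refused)  # True False
```

The point of the sketch is the control flow, not the checks themselves: incorporation is an explicit, refusable transition, not a side effect of exposure to data.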
Kami Harcej
Decision Integrity Stress Test

A structured approach to identifying where AI-driven decisions fail under real-world conditions.

This work focuses on testing whether decisions that are:
– correct
– compliant
– and authorized
…remain valid at the moment they are executed.

The stress test surfaces:
– where decisions drift from their original conditions
– where systems cannot re-establish grounding
– where inadmissible actions are still allowed to execute

It is designed to identify failure points that do not appear in audits, logs, or standard governance processes. Used before deployment, it helps ensure that AI systems can withstand real conditions, not just pass validation on paper.
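A small sketch of the kind of check such a stress test would exercise, under assumed names (Decision, still_valid, the TTL value — none of these come from the post): re-establish grounding at execution time instead of trusting that an earlier approval still holds.

```python
# Illustrative sketch of an execution-time validity check: a decision that
# was correct, compliant, and authorized when approved may no longer hold
# when it runs. All names and the 60-second window are hypothetical.
import time

class Decision:
    def __init__(self, action, preconditions, approved_at, ttl_seconds):
        self.action = action                # what the system intends to do
        self.preconditions = preconditions  # conditions that justified approval
        self.approved_at = approved_at      # when the decision was validated
        self.ttl_seconds = ttl_seconds      # how long the approval is trusted

def still_valid(decision, current_conditions, now=None):
    """Re-establish grounding at execution time instead of trusting approval."""
    now = now or time.time()
    if now - decision.approved_at > decision.ttl_seconds:
        return False  # drifted: the approval has aged past its validity window
    # every original precondition must still hold under current conditions
    return all(current_conditions.get(k) == v
               for k, v in decision.preconditions.items())

d = Decision("release_payment", {"account_frozen": False},
             approved_at=time.time(), ttl_seconds=60)
print(still_valid(d, {"account_frozen": False}))  # True: grounding re-established
print(still_valid(d, {"account_frozen": True}))   # False: inadmissible at execution
```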
Kami Harcej
This work highlights a critical failure mode in AI systems: decisions that are correct, compliant, and authorized, but no longer valid at the moment they are executed.

The focus is on how outputs transition into authority through repeated use, and how systems can begin to act on those outputs without re-validating whether they still hold under current conditions.

It explores:
– how authority forms through interaction, not just formal assignment
– why governance often fails before execution, not after
– where systems allow inadmissible actions to become real

This perspective is used to identify where AI-driven decisions drift from their original conditions, even when everything appears governed.
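As a toy illustration of outputs turning into authority through repeated use (all names and the threshold are invented for this sketch): count unvalidated reuses of an output and flag when it starts being treated as authoritative by habit rather than by grant.

```python
# Toy sketch: an output gains implicit authority once it is reused often
# enough without re-validation. The threshold and names are made up.
from collections import Counter

REUSE_THRESHOLD = 3   # hypothetical: reuses before an output acts like authority
uses = Counter()
validated = set()     # output ids whose grounding has been re-checked

def consume(output_id):
    """Record a downstream use; warn when an unvalidated output gains authority."""
    uses[output_id] += 1
    if uses[output_id] >= REUSE_THRESHOLD and output_id not in validated:
        print(f"{output_id}: reused {uses[output_id]}x without re-validation; "
              "its authority is now implicit, not formally assigned")

for _ in range(3):
    consume("q3-forecast")   # the third use triggers the flag
```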