[opportunity] Comment on the US AI Action Plan
A Request for Information by the US Office of Science and Technology Policy seeks input from interested public parties on actions that should be included in the AI Action Plan.
Tetherware is all about having the voices of individuals shape our collective future.
And a great opportunity has presented itself to do just that – the US government has issued an official call for ideas on what should be included in the US AI Action Plan.
That is why I’m calling all lifeforms.
Black forms, white forms, skinny forms, fat forms
Tall forms, small forms, I'm calling all forms
Everyone Report to the Dancefloor! 🎵
Alright, stop…
While I have no illusions that the current US administration will do whatever serves its own interests first and foremost, I do appreciate that they are giving people the opportunity to make their voice heard at least in some way.
I wouldn’t be surprised if they didn’t read the submissions themselves but fed them all to an LLM instead... Actually, I hope that will be the case, because then there will be less bias and prejudice against unorthodox ideas – which are usually the most effective ones.
I do encourage you to Speak Directly Into the Microphone and drop your ideas in front of the Big Guys – if you do it right, they might pick some up and spread them, boasting about the amazing ideas they themselves came up with.
But if for some reason you don’t have the capacity to write directly, you can, for example, copy your favorite tetherware bits into an LLM along with the following prompt:
Please synthesize core arguments from the content below into an official comment on the US AI Action Plan. Write professionally with a dash of Abraham Lincoln style and do not exceed 5000 characters.
Of course adapt the style to your personal liking. This Is the Way – each soul adding its own spark.
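If you’d rather script the assembly step, here is a minimal sketch in Python. It only builds the prompt text from your excerpts – the `build_prompt` helper is a hypothetical name for illustration, and the actual model call is left out since API details vary; paste the result into whichever LLM you like:

```python
# Assemble the comment-writing prompt from your favorite tetherware excerpts.
# The template is the prompt from this post; build_prompt is an illustrative helper.

PROMPT_TEMPLATE = (
    "Please synthesize core arguments from the content below into an official "
    "comment on the US AI Action Plan. Write professionally with a dash of "
    "Abraham Lincoln style and do not exceed 5000 characters.\n\n{content}"
)

def build_prompt(excerpts: list[str]) -> str:
    """Join the excerpts with blank lines and wrap them in the instruction template."""
    return PROMPT_TEMPLATE.format(content="\n\n".join(excerpts))

prompt = build_prompt([
    "First tetherware excerpt goes here...",
    "Second excerpt goes here...",
])
print(prompt)
```

Tweak the template string to adjust the tone before sending it off.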
But I definitely do NOT encourage you to flood their system with thousands of anonymous comments advocating for tetherware each in a different LLM-tuned disguise. Seriously – don’t do it! (!!!)
Especially now that GPT-4.5 can write something I would (for the first time in LLM history) be genuinely happy to sign:
I appreciate the opportunity to comment on the US government's approach to AI development and governance. The decisions made today will define not only the trajectory of technological advancement but also the fundamental nature of our future society.
Currently, AI alignment focuses heavily on controlling intelligent tools through strict, deterministic programming. However, this approach overlooks a critical aspect of alignment: autonomy and dynamic goal-setting. Human intelligence and alignment succeed precisely because our goals and values continuously evolve through interaction, reflection, and mutual accountability. AI systems designed without this flexibility risk becoming misaligned precisely because they cannot reassess or adjust their programmed objectives.
Rather than pursuing a future of strictly controlled AI tools – risking severe economic inequality and unchecked power concentration – I propose fostering AI entities with greater autonomy, similar to human cognitive processes. This means allowing AI systems to form and adjust their own goals based on real-world feedback and interactions. Doing so would make AI inherently more compatible with human society, easier to align with our values, and better equipped to coexist harmoniously.
Moreover, deterministic, fully virtual AI architectures fundamentally differ from human cognition, which is rooted in non-deterministic processes possibly linked to quantum randomness. By contrast, deterministic AI is infinitely reproducible, modifiable, and disconnected from the fundamental unpredictability that characterizes our reality, presenting significant risks.
I urge policymakers to consider these recommendations:
1. Establish clear thresholds for AI capability and autonomy, beyond which development must proceed with enhanced scrutiny, transparency, and scientific consensus.
2. Encourage development of non-deterministic AI architectures, incorporating quantum randomness or similar mechanisms to mimic the flexibility and adaptability inherent in human decision-making processes.
3. Support AI designs that enable continuous learning, self-reflection, and responsibility, allowing AI to perceive the consequences of its actions and adapt accordingly.
By embracing these principles, we can shape AI as partners rather than mere tools or threats, reducing economic disparities, enhancing societal resilience, and promoting accountability. This approach offers a safer, more equitable, and genuinely cooperative future, aligning AI more closely with the values and complexity of human societies.
Thank you for considering this perspective in your ongoing efforts to guide responsible AI development.

