4thegreedy
2024-02
decision making relies on categorization.
this categorization, confining an otherwise abstract thing to a name, is often instinctual.
a human acts, makes a decision, forms a response, based on the conditions they measure, and their preexisting biases or understanding of prior state.
if x do y
if y is programmed, it’s preordained.
the ‘need’ for increased efficiency inevitably leads to the automation of deciding and acting, and, as such, to categorization through models capable of that instinctual identification.
because of the speed at which automated decisions take effect,
because of the potential scale of that effect,
because of the abstraction of direct culpability,
i’m thinking :
what actions can i, a human
partially unaware of my subconscious drives,
automate and enforce based on the simple categorization of human or not?
[ aka localized exploration of potentially sus motives ]
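the ‘if x do y’ rule above, applied to the human-or-not categorization, might be sketched like this. all names here (is_human, respond, the signals) are hypothetical stand-ins, not any real system :

```python
# a hypothetical, minimal decision rule: categorize, then act automatically.
# is_human is a stand-in for whatever classifier does the instinctual sorting.
def is_human(signal: str) -> bool:
    # any categorization, however learned, reduces to a boolean at the moment of action.
    return signal == "human"

def respond(signal: str) -> str:
    # if x do y — once programmed, the response is preordained.
    return "let through" if is_human(signal) else "distract"

print(respond("human"))
print(respond("bot"))
```

the speed, scale, and abstracted culpability all live downstream of that one branch.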
technology, in all its forms, has always been a medium and an intermediary.
with that in mind, here are some concepts i’m interested in :
. a known system is predictable == can be toyed with
. distract, obstruct
. transparency
. feedback
. error & “error”
. optimization toward what end