I have been diving into the topic of algorithmic transparency in government decision-making, and I want to hear your thoughts! With AI and automated systems playing a bigger role in public services such as welfare, policing, and healthcare, there is a lot of discussion around how transparent these systems should be.
How much transparency is actually feasible without compromising security or efficiency?
Are there any good examples of governments successfully implementing algorithmic transparency?
What kind of oversight should be in place to ensure fairness and accountability?
My opinionated and potentially contentious take, based on my research and observation, is that only total algorithmic transparency can deliver both security and efficiency, provided it is monitored through a combination of official government and citizen oversight.
Security by obscurity fails eventually, and it fails faster in the digital world. Efficiency and efficacy improve when stakeholders are engaged, and the citizen is a stakeholder, being the subject of these systems.
Such an open approach would require citizens to know the intended purpose of the algorithm; if a decision prejudices them individually, they would need to understand why, and what they can do about it.
Evidence shows that it is genuinely difficult to start with a closed process and open it up later; keeping stakeholders involved from the start and dealing with issues as they arise is far more manageable. Without transparency from the outset, it takes a near revolution to open things up afterward.