Accountable
Purpose
Agency & Interaction
Enables you to control or interact with aspects of a space or a technology.
Fire & Emergency
Supports services that ensure public safety and health related to emergencies.
Health
Supports the measurement or monitoring of aspects of the physical environment that impact human health, such as radiation or air quality, or in specific contexts such as the workplace.
Inform
Supports the provision of information, for example about a location or a service, or to provide assistance.
Planning & Decision-making
Supports the development of future plans, or enables or measures the impact of a decision.
Safety & Security
Enables a safe and/or secure environment, for example for the purposes of fire safety, home security, or ensuring safe passage in places such as airports or roads.
Switch
Supports a mechanical function, such as turning a device on or off, opening or closing, or adjusting brightness and intensity.
Waste Management
Supports the handling and disposal of waste, including recyclables, compost, and hazardous materials.
Decision Type
Accept or deny
A binary decision-making process where the AI/algorithm evaluates information against specific criteria to produce a yes/no outcome. The system processes inputs through predetermined rules or learned patterns to determine if a request, application, or condition meets the required threshold for approval. Examples include civil service applications, school admissions, loan approvals, or benefit eligibility determinations.
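As a minimal sketch of such a threshold-based accept/deny rule (the criteria, field names, and numbers below are illustrative, not a real eligibility policy):

```python
def meets_threshold(application: dict, min_income: float = 30_000,
                    max_debt_ratio: float = 0.4) -> bool:
    """Binary accept/deny: every predetermined criterion must pass."""
    income_ok = application["income"] >= min_income
    debt_ok = application["debt"] / application["income"] <= max_debt_ratio
    return income_ok and debt_ok

# Two hypothetical applications evaluated against the rules above.
print(meets_threshold({"income": 45_000, "debt": 9_000}))   # True
print(meets_threshold({"income": 25_000, "debt": 2_000}))   # False (income below threshold)
```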
Anomaly Detection
A decision-making process where the AI/algorithm identifies unusual patterns or outliers that deviate from expected behavior. The system establishes a baseline of normal activity and flags significant deviations. Examples include fraud detection, infrastructure monitoring, or public health surveillance systems.
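The baseline-and-deviation approach described above can be sketched with a simple statistical rule (the threshold and readings are illustrative; production systems use far richer models):

```python
from statistics import mean, stdev

def find_anomalies(values, threshold=2.0):
    """Flag points more than `threshold` standard deviations from the baseline mean."""
    mu, sigma = mean(values), stdev(values)
    return [v for v in values if abs(v - mu) / sigma > threshold]

# Hourly transaction counts with one obvious outlier.
readings = [12, 14, 13, 15, 11, 13, 14, 95, 12, 13]
print(find_anomalies(readings))  # → [95]
```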
Matching
A decision-making process where the AI/algorithm pairs entities (people, resources, opportunities) based on compatibility or optimal fit. The system evaluates multiple variables to determine the most appropriate connections. Examples include housing assistance placement, job applicant matching, or pairing service providers with community needs.
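A greedy version of this pairing logic can be sketched as follows (the applicants, housing units, and compatibility score are hypothetical; real matching systems weigh many more variables):

```python
def match_pairs(applicants, units, score):
    """Greedily pair each applicant with the best-scoring still-available unit."""
    pairs, remaining = [], set(units)
    for a in applicants:
        best = max(remaining, key=lambda u: score(a, u), default=None)
        if best is not None:
            pairs.append((a, best))
            remaining.remove(best)
    return pairs

applicants = [{"name": "A", "needs": 2}, {"name": "B", "needs": 1}]
units = [("u1", 1), ("u2", 3), ("u3", 2)]          # (unit id, bedrooms)
closeness = lambda a, u: -abs(a["needs"] - u[1])   # prefer the closest bedroom count
print(match_pairs(applicants, units, closeness))
```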
Personalization
A decision-making process where the AI/algorithm tailors content, recommendations, or services to specific individuals based on their preferences, behaviors, or characteristics. The system analyzes personal data to customize experiences. Examples include content recommendations, personalized learning paths, or targeted communications from public services.
Priority ranking
A decision-making process where the AI/algorithm evaluates multiple items or requests and arranges them in order of importance or urgency based on specific criteria. The system assigns relative values to each item and sorts them accordingly. Examples include emergency response triage, customer service ticket ordering, maintenance request scheduling, or public project funding priorities.
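The assign-a-score-then-sort pattern can be sketched as below (the weighting formula and ticket data are illustrative assumptions, not a real triage policy):

```python
def rank_requests(requests):
    """Score each request on illustrative criteria, then sort highest priority first."""
    def score(r):
        return (3 * r["severity"]
                + 2 * r["affected_people"] / 100
                + (5 if r["safety_risk"] else 0))
    return sorted(requests, key=score, reverse=True)

requests = [
    {"id": "t1", "severity": 2, "affected_people": 50, "safety_risk": False},
    {"id": "t2", "severity": 1, "affected_people": 300, "safety_risk": True},
    {"id": "t3", "severity": 3, "affected_people": 10, "safety_risk": False},
]
print([r["id"] for r in rank_requests(requests)])  # → ['t2', 't3', 't1']
```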
Allocation of resources
A decision-making process where the AI/algorithm determines how to distribute limited resources (such as time, money, personnel, or materials) across different needs or requests. The system analyzes factors like availability, demand, and priority to optimize distribution. Examples include determining trash pickup routes, assigning staff schedules, or distributing public services across neighborhoods.
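A minimal sketch of distributing a limited budget by priority (the greedy strategy, project names, and figures are illustrative; real allocation systems optimize over many constraints):

```python
def allocate(budget, requests):
    """Fund the highest-priority requests first until the budget runs out."""
    funded = []
    for r in sorted(requests, key=lambda r: r["priority"], reverse=True):
        if r["cost"] <= budget:
            funded.append(r["name"])
            budget -= r["cost"]
    return funded, budget

projects = [
    {"name": "park",   "cost": 60, "priority": 3},
    {"name": "road",   "cost": 50, "priority": 2},
    {"name": "lights", "cost": 30, "priority": 1},
]
print(allocate(100, projects))  # → (['park', 'lights'], 10)
```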
Input Dataset
Spatial
Data that represents a location, such as an address, a place name or geographic coordinates; or a structure, such as a floorplan.
Processing Algorithm or AI
Large Language Model
A type of AI system trained on vast amounts of text data that can understand, generate, and manipulate human language. These models can perform tasks like writing text, answering questions, summarizing content, and translating languages. They work by predicting what words are likely to come next in a sequence.
Optimization Algorithm
A mathematical process that finds the best solution from all possible solutions for a given problem, such as determining the most efficient route, schedule, or resource allocation. These algorithms work by systematically evaluating different possibilities to maximize or minimize specific outcomes like cost, time, or resources.
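As a sketch of "systematically evaluating different possibilities," the exhaustive route search below tries every visiting order and keeps the cheapest (the distance table is invented; practical optimizers use heuristics rather than brute force for large inputs):

```python
from itertools import permutations

def shortest_route(start, stops, dist):
    """Evaluate every visiting order and keep the one minimizing total distance."""
    best_order, best_cost = None, float("inf")
    for order in permutations(stops):
        cost, here = 0, start
        for stop in order:
            cost += dist[here][stop]
            here = stop
        if cost < best_cost:
            best_order, best_cost = order, cost
    return best_order, best_cost

dist = {
    "depot": {"a": 2, "b": 9, "c": 4},
    "a": {"b": 3, "c": 8, "depot": 2},
    "b": {"a": 3, "c": 1, "depot": 9},
    "c": {"a": 8, "b": 1, "depot": 4},
}
print(shortest_route("depot", ["a", "b", "c"], dist))  # → (('a', 'b', 'c'), 6)
```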
Privacy-Preserving Transformation
A technology that modifies data to remove or obscure personally identifiable information while preserving the underlying patterns needed for analysis. These systems apply techniques such as masking, tokenization, or aggregation to protect individual privacy. In public spaces, de-identification processing enables valuable insights from data collection without compromising personal information. The technology typically creates a barrier between raw collected data and the information used for decision-making.
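The masking, tokenization, and aggregation techniques mentioned above can be sketched as follows (the record fields, salt handling, and granularity choices are illustrative; real deployments need careful key management and re-identification risk analysis):

```python
import hashlib

SALT = "local-secret-salt"  # in practice, stored separately from the data

def tokenize(value: str) -> str:
    """Replace an identifier with a stable, non-reversible token (salted hash)."""
    return hashlib.sha256((SALT + value).encode()).hexdigest()[:12]

def deidentify(record: dict) -> dict:
    """Drop direct identifiers, tokenize the ID, and coarsen detail to preserve patterns."""
    return {
        "visitor": tokenize(record["device_id"]),
        "zone": record["location"].split("/")[0],   # aggregate precise spot to a zone
        "hour": record["timestamp"][:13],           # truncate to hour granularity
    }

raw = {"device_id": "AA:BB:CC:01",
       "location": "plaza/north-bench-3",
       "timestamp": "2024-05-01T14:22:07"}
print(deidentify(raw))
```

The same device yields the same token across records, so usage patterns survive the transformation even though the raw identifier never reaches the analysis layer.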
Recommendation Systems
A technology that suggests items, services, or actions to users based on their preferences, behavior patterns, or similarities to other users. These systems analyze historical data to predict what might be most relevant or useful to a specific individual or group. In public spaces, recommendation systems can personalize information delivery, improve wayfinding, or enhance service efficiency. They typically employ filtering techniques that balance individual preferences with broader usage patterns.
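The filtering idea can be sketched with a tiny user-similarity recommender (the services and usage histories are invented; real systems combine many signals and filtering techniques):

```python
def recommend(target, histories, top_n=2):
    """Suggest items used by similar users (Jaccard similarity) that the target hasn't seen."""
    def similarity(a, b):
        return len(a & b) / len(a | b)
    scores = {}
    for items in histories.values():
        sim = similarity(target, items)
        for item in items - target:
            scores[item] = scores.get(item, 0) + sim
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

target = {"park_map", "transit_times"}
histories = {
    "u1": {"park_map", "transit_times", "event_calendar"},
    "u2": {"transit_times", "bike_share"},
    "u3": {"museum_hours"},
}
print(recommend(target, histories))  # → ['event_calendar', 'bike_share']
```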
Sentiment Analysis
A technology that evaluates emotional tone and subjective information from text, images, or video. These systems analyze input data to determine whether the expressed sentiment is positive, negative, or neutral. In public spaces, sentiment analysis can be used to assess crowd mood, evaluate public response to services, or detect rising tensions. The technology typically works by identifying emotional indicators and comparing them against trained models of sentiment patterns.
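A minimal lexicon-based sketch of the positive/negative/neutral classification (the word lists are tiny illustrative stand-ins for the trained sentiment models the text describes):

```python
POSITIVE = {"great", "clean", "helpful", "safe", "love"}
NEGATIVE = {"dirty", "broken", "unsafe", "slow", "crowded"}

def sentiment(text: str) -> str:
    """Count emotional indicators from small lexicons and compare the totals."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("The new plaza is clean and safe"))             # positive
print(sentiment("Elevator broken again and platform crowded"))  # negative
```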
Text-to-Speech
Technology that converts written text into spoken voice output. These systems analyze text and generate synthetic speech that mimics human voice patterns, intonations, and pronunciations. Text-to-Speech is commonly used in accessibility tools, virtual assistants, and automated customer service systems.
Time Series Forecasting
A method that analyzes historical data points collected over time to predict future values. This technology identifies patterns and trends in time-ordered data to make predictions about future events or behaviors. Common applications include weather forecasting, stock market prediction, demand planning, and resource management.
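The pattern-extrapolation idea can be sketched with a simple moving-average forecast (the window size and ridership numbers are illustrative; real forecasters model trend, seasonality, and uncertainty):

```python
def moving_average_forecast(history, window=3, steps=2):
    """Forecast each next value as the mean of the previous `window` observations."""
    series = list(history)
    for _ in range(steps):
        series.append(sum(series[-window:]) / window)
    return series[len(history):]

# Daily ridership counts (illustrative numbers).
print(moving_average_forecast([100, 104, 108, 112, 116]))
```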
Output Dataset
Spatial
Data that represents a location, such as an address, a place name or geographic coordinates; or a structure, such as a floorplan.
Access
Available to third parties
Data is available to third parties not involved in the data activity. This does not always mean that data is being resold.
Available to download
Data that can be accessed and downloaded online, either for free or for a fee.
Available to me
Available to me but not to other individuals. For example, as an individual you have access to all your electronic toll records for your car, but other individuals do not have access to that.
Available to the accountable organization
Data is available to the accountable organization.
Not available to me
Not available to me or other individuals. As an individual, there isn't a way for you to access this data.
Not available to the accountable organization
Data is not available to the accountable organization.
Not available to vendor
Data is not available to the vendor providing the data collection or processing technology.
Retention
Risks & Mitigation
Compromise of privacy
The risk that the AI system could access, expose, or deduce private information about individuals without their consent. This includes risks of data breaches, misuse of collected data, or the system's ability to infer sensitive characteristics from seemingly non-sensitive inputs. Mitigations may include data minimization practices, robust security measures, differential privacy techniques, and rigorous access controls.
Unforeseen Use or Function Creep
The risk that AI systems originally deployed for specific, limited purposes gradually expand in scope or are repurposed for applications beyond their original intent without proper evaluation or transparency. What begins as a system for one purpose (e.g., traffic management) might expand to others (e.g., law enforcement) without adequate assessment of new risks or public notification. This can undermine public trust and potentially lead to uses that weren't properly designed for or vetted.
Opaque decision-making
The risk that AI systems make decisions through processes that are difficult or impossible for humans to understand or explain. This lack of transparency makes it challenging to identify errors, biases, or other issues. Mitigations may include using interpretable AI models when possible, developing explanation methods, and maintaining documentation of system design and training processes.
Over-reliance and automation bias
The risk that people place excessive trust in AI systems, leading to insufficient human oversight or inability to question algorithmic decisions. This can result in uncritical acceptance of AI outputs even when they are incorrect or harmful. Mitigations may include clear communication about system limitations, training for users on appropriate reliance, and maintaining meaningful human involvement in critical decisions.
System Drift and Temporal Validity
The risk that an AI system's performance degrades over time as real-world conditions change from those present in training data. This occurs when the relationships between variables in the real world evolve, but the model remains static, leading to increasingly inaccurate outputs. Examples include urban planning models that don't account for demographic shifts or transportation patterns that change seasonally or with new infrastructure.
Unequal performance across groups
The risk that the AI system performs differently (typically worse) for certain demographic groups based on characteristics such as race, gender, age, disability status, or socioeconomic background. This can lead to biased or unfair outcomes that disproportionately impact vulnerable communities. Mitigations may include diverse training data, routine fairness audits, and regular performance monitoring across different demographic groups.
Rights
Right to Purpose Limitation
The right to ensure your data is only used for the specific purposes that were clearly stated when it was collected. This prevents organizations from using your data for new, unrelated purposes without your knowledge or consent. The data collector must specify and document the intended purposes before collection begins, and adhere to these limitations.
Right to Access
The right to request and receive information about what personal data an AI system has collected about you, how this data is being used, and what decisions have been made using this information. This includes the right to obtain a copy of your data in a readable format.
Right to Algorithmic Transparency
The right to understand how an AI system makes decisions that affect you, including meaningful information about the logic involved, the significance of the processing, and the likely consequences. This information should be provided in clear, plain language that allows you to understand how the system works and how it arrived at a particular decision.
Right to Be Forgotten
The right to request the deletion of your personal data from an AI system under certain conditions, such as when the data is no longer necessary for its original purpose, when you withdraw consent, or when there is no legitimate interest in continuing to process it. This includes the right to have the system "unlearn" information derived from your data where technically feasible.
Right to Contest
The right to challenge decisions made by an AI system that affect you, particularly when these decisions have legal or similarly significant effects. This includes the right to request human review of automated decisions, provide additional information, express your point of view, and have the decision reconsidered based on your input.
Right to Non-discrimination
The right to be free from discriminatory treatment by AI systems based on protected characteristics such as race, gender, age, religion, disability, or sexual orientation. Organizations must implement and demonstrate appropriate technical and organizational measures to prevent discriminatory outcomes from their AI systems.