Q. Can a user exploit security flaws present in ActivitySim to gain unauthorized access to agency computer systems?
ActivitySim is NOT a public-facing tool or service, because it is designed specifically to allow arbitrary code execution: models can be built to use any Python code. ActivitySim must be run on a machine where all users able to access the tool are trusted, and where all files read or run by ActivitySim come from a trusted source.
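To illustrate why input files must come from a trusted source: model components commonly evaluate expressions read from specification files against the simulation data, so a spec file is effectively code. The sketch below is a simplified, hypothetical illustration of that pattern (the column names and expression are invented, and this is not ActivitySim's actual API):

```python
import pandas as pd

# Hypothetical, simplified illustration -- not ActivitySim's actual API.
# Expressions in a model spec file are evaluated against the data,
# which is why every spec file must come from a trusted source.
persons = pd.DataFrame({"age": [25, 40, 67], "income": [30000, 80000, 20000]})

# An expression string as it might appear in a (trusted) spec file:
expr = "(age >= 65) | (income < 25000)"
eligible = persons.eval(expr)  # evaluates the string against the DataFrame
print(eligible.tolist())  # [False, False, True]
```

Because arbitrary expressions like this are evaluated by design, the security boundary is the machine and its users, not the input parser.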
Q. Can a malicious outside actor create new security vulnerabilities by adding code to ActivitySim, which then may create security vulnerabilities in agency computer systems when ActivitySim is installed?
It is very unlikely that a malicious outside actor can create new security vulnerabilities by adding code to ActivitySim. While we do accept pull requests from the public, the frequency of those PRs is extremely low, and these outside PRs are usually simple (e.g. fixing a typo in documentation), so security-focused reviews have typically not been necessary.
A larger risk comes from ActivitySim’s dependencies. Running the tool requires installing a significant number of other open-source libraries, including numpy, pandas, xarray, and others. In theory, any one of these dependencies could contain a significant software vulnerability; this kind of supply-chain vulnerability has been found in the past (e.g. https://nvd.nist.gov/vuln/detail/cve-2021-44228). The potential presence of such vulnerabilities is well known and essentially unavoidable, and reviewing dependencies for these risks is beyond the scope and capabilities of the ActivitySim consortium. In practice, the bulk of these open-source dependencies are so widely used and reviewed that it is unlikely a malicious actor could insert bad code into them and go undetected for long. Moreover, nearly all such risks can be effectively mitigated by running ActivitySim in an isolated container (e.g. Docker) that has no access to other parts of the system while it runs. This sandboxing is an easy way to protect against attackers using the dependency stack as an entry point into your systems.
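As a sketch of the sandboxing approach described above, a container can be started with no network access and a read-only filesystem, so that even a compromised dependency cannot reach the rest of the system. The image name (`my-activitysim`) and the mounted paths are placeholders for illustration, not an official ActivitySim image:

```shell
# Sketch only: "my-activitysim" and the mounted paths are placeholders.
# --network none blocks all network access from inside the container;
# --read-only plus a read-only input mount limits what a run can touch,
# leaving only the output directory (and a scratch tmpfs) writable.
docker run --rm --network none --read-only \
  -v "$PWD/inputs:/work/inputs:ro" \
  -v "$PWD/outputs:/work/outputs" \
  --tmpfs /tmp \
  my-activitysim \
  activitysim run -c /work/inputs/configs -d /work/inputs/data -o /work/outputs
```

With this setup, a malicious dependency could at worst corrupt the model outputs; it could not exfiltrate data over the network or modify other parts of the host system.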
Q. Have there been any security audits undertaken by security specialists to review the current code base? Is there any plan to do so in the future?
A security audit was conducted in 2022, not by the consortium but by KPMG (Australia), although the report was shared with the consortium. The report appears to be a generic scan report generated by an automated security tool, rather than a reasoned analysis and assessment by a security professional. The audit did not consider how the platform is actually used, and flagged numerous security “flaws” such as:
- Use of non-cryptographically secure random numbers – we use random numbers for simulation, not cryptography, so this finding is irrelevant
- CRLF injection – since we explicitly allow arbitrary code execution, all input files must already come from trusted sources
- Directory traversal – since we allow arbitrary code execution, all users must already be trusted, so directory traversal adds no new risk
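To illustrate the first point: simulation code needs reproducible streams of pseudo-random numbers, so that runs can be repeated and debugged exactly, which is precisely the property cryptographic generators are designed to prevent. A minimal sketch using NumPy (the seed value is arbitrary):

```python
import numpy as np
import secrets

# Simulation wants reproducibility: the same seed must yield the same
# draws so that a model run can be replayed and debugged exactly.
rng_a = np.random.default_rng(seed=42)
rng_b = np.random.default_rng(seed=42)
draws_a = rng_a.random(5)
draws_b = rng_b.random(5)
assert (draws_a == draws_b).all()  # identical seeds give identical streams

# A cryptographic source (Python's secrets module) is deliberately
# unseedable and non-reproducible -- appropriate for secrets,
# unusable for replaying a simulation.
token = secrets.token_hex(8)
```

Flagging seeded pseudo-random generators as a vulnerability therefore misreads how the numbers are used.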
The consortium reviewed this report, but no actions were required at the time to address these specific “security flaws”, as they were not relevant or applicable to our work.
Q. Before new code is accepted to the repository, is there a review to consider how well the new code has adopted security-aware coding practices?
New code is subject to review, and cannot be committed to the “main” branch except via a Pull Request (PR). PRs are not merged until they are reviewed by at least one other team member, and only authorized users can approve PRs for merging. The list of authorized users is short, including only key staff from each of the three bench contracting teams, as well as a few people from member agencies. The vast majority of new code is written by employees of the bench contractors or member agencies. We do accept pull requests from the public, although the frequency of those PRs is extremely low. We do not have a formal process for more rigorously reviewing outside PRs, but they are typically simple (e.g. fixing a typo in documentation), so reviewing those contributions has been straightforward.