Interacting with GCP programmatically
Before we analyze the role assignments, we must decide how we will interact with GCP, as we won’t use the console for this.
When working with clouds such as AWS, the Python SDK (boto3) is a no-brainer. For GCP, however, the choice between the API Client, the Python Cloud Client Libraries, and the gcloud CLI is largely a matter of preference; I almost exclusively use the gcloud CLI for my assessments.
The goal of interacting with the API is to gather the information relevant to this article, meaning anything related to roles and permissions. Specifically, I am interested in the following:
- The role assignments at all levels
- Which roles each principal has
- Which permissions each role grants
Querying every single resource
This approach involves querying the organization and looping through all folders and projects to determine their policy assignments. It can also be extended to individual resources.
Limitations of this approach:
- A large number of queries is needed to cover every single resource, so it is slower to run, and inherited permissions will be listed multiple times unless you build deduplication logic for it.
There are plenty of scripts and resources for this available online, such as GCP-IAM-Privilege-Escalation and GitLab’s blog post.
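As a rough illustration of the looping approach (a minimal sketch, not one of the scripts referenced above), project-level policies can be collected as below; folders and the organization would be handled the same way with their own gcloud commands:

```python
import json
import shutil
import subprocess

def gcloud_json(args):
    """Run a gcloud command with JSON output and return the parsed result."""
    out = subprocess.run(
        ["gcloud", *args, "--format=json"],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)

policies = {}
if shutil.which("gcloud"):  # skip gracefully when the CLI is unavailable
    try:
        for project in gcloud_json(["projects", "list"]):
            pid = project["projectId"]
            # One get-iam-policy call per project; this is where the
            # query count (and runtime) grows with the environment
            policies[pid] = gcloud_json(["projects", "get-iam-policy", pid])
    except subprocess.CalledProcessError as err:
        print(f"gcloud query failed: {err.stderr}")
```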
Querying through Asset Inventory
Cloud Asset Inventory is useful for almost anything you assess in GCP. A single API can return detailed information about different resources and their configurations, and we found it especially helpful when assessing IAM.
Through Asset Inventory, we can run a single query to gather all resources:
gcloud asset search-all-iam-policies \
  --scope=organizations/<ORG_ID> \
  --format=json > iam-policies.json
Possible limitations of this approach are that some services may not be supported by Cloud Asset Inventory, and that inherited permissions are output at every level they apply to.
I still find this approach superior, as it is much faster and the output is easy to process further.
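To make the combined output easier to work with, the results can be inverted into a per-principal index. The snippet below is a minimal sketch of that post-processing step; the sample record mirrors the JSON structure that gcloud writes to iam-policies.json, with illustrative names:

```python
from collections import defaultdict

# Sample record shaped like one entry of the search-all-iam-policies output
sample_results = [
    {
        "resource": "//cloudresourcemanager.googleapis.com/projects/demo-project",
        "project": "projects/123456789",
        "policy": {
            "bindings": [
                {
                    "role": "roles/storage.admin",
                    "members": [
                        "user:alice@example.com",
                        "serviceAccount:ci@demo-project.iam.gserviceaccount.com",
                    ],
                },
            ]
        },
    },
]

def index_by_member(results):
    """Build member -> set of (role, resource) from asset search results."""
    index = defaultdict(set)
    for item in results:
        resource = item.get("resource", "")
        for binding in item.get("policy", {}).get("bindings", []):
            for member in binding.get("members", []):
                index[member].add((binding.get("role", ""), resource))
    return index

index = index_by_member(sample_results)
```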
Assessing privileges for roles
The built-in roles may change over time, and custom roles can exist at either the project level or the organization level. From the JSON output of the previous step, we want the full details of the permissions associated with every role we've uncovered.
We've added an example of how this can be done using the gcloud CLI wrapped in Python:
import json
import subprocess
import sys
from collections import defaultdict


def run_command(command):
    """Run a shell command and return its output"""
    try:
        result = subprocess.run(command, shell=True, check=True, capture_output=True, text=True)
        return result.stdout.strip()
    except subprocess.CalledProcessError as e:
        print(f"Error executing command: {command}")
        print(f"Error message: {e.stderr}")
        return None


def get_project_id_from_number(project_number):
    """Convert a project number to a project ID"""
    if not project_number:
        return None
    command = f"gcloud projects describe {project_number} --format='value(projectId)'"
    return run_command(command)


def get_role_permissions(role, project=None, organization=None):
    """Get permissions for a role, handling both predefined and custom roles"""
    if role.startswith("roles/"):
        # Predefined role
        command = f"gcloud iam roles describe {role} --format=json"
    elif project:
        # Project-level custom role; resolve a project number to its ID first
        project_id = get_project_id_from_number(project)
        if not project_id:
            print(f"Could not resolve project ID for number: {project}")
            return None
        role_name = role.split("/")[-1]
        command = f"gcloud iam roles describe {role_name} --project={project_id} --format=json"
    elif organization:
        # Organization-level custom role
        role_name = role.split("/")[-1]
        command = f"gcloud iam roles describe {role_name} --organization={organization} --format=json"
    else:
        return None

    output = run_command(command)
    if not output:
        return None
    try:
        role_data = json.loads(output)
        return {
            "name": role_data.get("name", ""),
            "title": role_data.get("title", ""),
            "description": role_data.get("description", ""),
            "permissions": role_data.get("includedPermissions", []),
        }
    except json.JSONDecodeError:
        print(f"Error decoding JSON from role description for {role}")
        return None


def main(json_file, output_file=None):
    # Load the IAM policies data
    with open(json_file, "r") as f:
        policies = json.load(f)
    if isinstance(policies, dict):
        policies = [policies]

    # Extract unique roles together with the scopes they were seen in
    roles_info = defaultdict(set)
    for policy in policies:
        project_id = policy.get("project", "").replace("projects/", "")
        org_id = policy.get("organization", "").replace("organizations/", "")
        for binding in policy.get("policy", {}).get("bindings", []):
            role = binding.get("role", "")
            if role:
                roles_info[role].add((project_id, org_id))

    # For each unique role, look up its permissions
    results = {}
    for role, scopes in roles_info.items():
        print(f"Processing role: {role}")
        # Just use the first project/org scope we found for the role
        project_id, org_id = next(iter(scopes))
        try:
            if role.startswith("roles/"):
                # Predefined role
                role_data = get_role_permissions(role)
            elif role.startswith("projects/"):
                # Project-level custom role
                role_parts = role.split("/")
                if len(role_parts) >= 3:
                    # Extract the project ID from the role path
                    project_from_role = role_parts[1]
                    role_data = get_role_permissions(role, project=project_from_role)
                else:
                    role_data = get_role_permissions(role, project=project_id)
            else:
                # Try as an organization-level custom role first,
                # then fall back to a project-level custom role
                role_data = get_role_permissions(role, organization=org_id)
                if not role_data:
                    role_data = get_role_permissions(role, project=project_id)
        except Exception as e:
            print(f"Error processing role {role}: {str(e)}")
            continue
        if role_data:
            results[role] = role_data

    if output_file:
        with open(output_file, "w") as f:
            json.dump(results, f, indent=2)
        print(f"Results written to {output_file}")
    else:
        print(json.dumps(results, indent=2))


if __name__ == "__main__":
    if len(sys.argv) < 2 or len(sys.argv) > 3:
        print("Usage: python script.py <iam_policies.json> [output.json]")
        sys.exit(1)
    input_file = sys.argv[1]
    output_file = sys.argv[2] if len(sys.argv) == 3 else None
    main(input_file, output_file)
An example of the output after it’s been processed:
{
  "projects/o3c-user-sandbox/roles/StorageBucketAdmin": {
    "name": "projects/o3c-user-sandbox/roles/StorageBucketAdmin",
    "title": "StorageBucketAdmin",
    "description": "",
    "permissions": [
      "storage.buckets.create",
      "storage.buckets.list",
      "storage.objects.create",
      "storage.objects.delete",
      "storage.objects.get",
      "storage.objects.update"
    ]
  },
  "projects/o3c-user-sandbox/roles/app_security_role": {
    "name": "projects/o3c-user-sandbox/roles/app_security_role",
    "title": "app_security_role",
    "description": "",
    "permissions": [
      "clientauthconfig.clients.listWithSecrets",
      "firebaserules.rulesets.get",
      "firebaserules.rulesets.list",
      "storage.buckets.get",
      "storage.objects.get"
    ]
  },
  "projects/o3c-user-sandbox/roles/app_security_role_data": {
    "name": "projects/o3c-user-sandbox/roles/app_security_role_data",
    "title": "app_security_role_data_scanning",
    "description": "",
    "permissions": [
      "bigquery.jobs.create",
      "bigquery.tables.get",
      "bigquery.tables.getData",
      "bigquery.tables.list",
      "cloudsql.backupRuns.create",
      "cloudsql.backupRuns.delete",
      "cloudsql.backupRuns.get",
      "cloudsql.backupRuns.list",
      "cloudsql.instances.get",
      "cloudsql.instances.list",
      "storage.buckets.getObjectInsights",
      "storage.objects.get",
      "storage.objects.list",
      "storageinsights.reportConfigs.create",
      "storageinsights.reportConfigs.get",
      "storageinsights.reportConfigs.list",
      "storageinsights.reportDetails.get",
      "storageinsights.reportDetails.list"
    ]
  },
  "projects/o3c-user-sandbox/roles/app_security_role_disk_analysis": {
    "name": "projects/o3c-user-sandbox/roles/app_security_role_disk_analysis",
    "title": "app_security_role_disk_analysis",
    "description": "",
    "permissions": [
      "compute.disks.get",
      "compute.disks.useReadOnly",
      "compute.globalOperations.get",
      "compute.images.get",
      "compute.images.getIamPolicy",
      "compute.images.list",
      "compute.images.useReadOnly",
      "compute.snapshots.get",
      "compute.snapshots.list"
    ]
  },
  "projects/o3c-user-sandbox/roles/app_security_role_forensic": {
    "name": "projects/o3c-user-sandbox/roles/app_security_role_forensic",
    "title": "app_security_role_forensic",
    "description": "",
    "permissions": [
      "compute.disks.createSnapshot",
      "compute.snapshots.create",
      "compute.snapshots.delete",
      "compute.snapshots.setLabels",
      "compute.snapshots.useReadOnly"
    ]
  },
  "projects/o3c-user-sandbox/roles/app_security_role_registry_scanning": {
    "name": "projects/o3c-user-sandbox/roles/app_security_role_registry_scanning",
    "title": "app_security_role_registry_scanning",
    "description": "",
    "permissions": [
      "artifactregistry.repositories.downloadArtifacts",
      "storage.objects.get"
    ]
  },
  "projects/o3c-user-sandbox/roles/app_security_role_serverless_scanning": {
    "name": "projects/o3c-user-sandbox/roles/app_security_role_serverless_scanning",
    "title": "app_security_role_serverless_scanning",
    "description": "",
    "permissions": [
      "artifactregistry.repositories.downloadArtifacts",
      "artifactregistry.repositories.get",
      "cloudfunctions.functions.get",
      "cloudfunctions.functions.sourceCodeGet",
      "run.revisions.get",
      "storage.objects.get",
      "storage.objects.list"
    ]
  }
}
Correlating the assignments into effective privileges
Once we have all the assignments, we also need to understand what capabilities each role has.
My favourite lookup for anything related to GCP permissions is https://gcp.permissions.cloud, which has annotations such as ‘data access’ and ‘possible privesc’. Another source is the permission-mapping.yaml file from GCPwn. There are also several write-ups I would recommend, particularly those from GitLab and Rhino Security Labs. What these sources have in common is that they help you understand which permissions associated with the different roles allow sensitive actions, such as data access or privilege escalation, and how they can be abused.
Mapping these sources against the previously outputted data is a simple processing task that will give you a good overview of exactly which roles are privileged.
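In practice, this mapping is a simple lookup. The sketch below illustrates the idea; the watchlist here is a tiny, hypothetical excerpt, and the sources above provide far more complete annotations:

```python
# Illustrative (NOT exhaustive) watchlist of sensitive permissions,
# loosely modelled on annotations like those at gcp.permissions.cloud
SENSITIVE = {
    "iam.serviceAccounts.getAccessToken": "privilege escalation",
    "iam.serviceAccountKeys.create": "privilege escalation",
    "storage.objects.get": "data access",
    "bigquery.tables.getData": "data access",
}

def flag_sensitive(roles):
    """roles: {role_name: {"permissions": [...]}} as produced in the previous step."""
    flagged = {}
    for name, data in roles.items():
        hits = {p: SENSITIVE[p] for p in data.get("permissions", []) if p in SENSITIVE}
        if hits:
            flagged[name] = hits
    return flagged

# Hypothetical role, shaped like the output of the earlier script
roles = {"projects/demo/roles/app_security_role": {"permissions": ["storage.objects.get"]}}
flagged = flag_sensitive(roles)
```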
Making sense of all the data
Now that you have an overview of all the assignments and their effective permissions, you need to understand which assets are part of any given scope in order to quantify the risk. We approach this on a case-by-case basis: sometimes entire projects or folders are classified as important, while in other cases it may be specific resources in specific projects.
You also need to correlate which users can impersonate privileged service accounts with the permissions those service accounts hold. This is well covered in the previously referenced resources, so I won't go into detail here.
Our key takeaway is that using Cloud Asset Inventory to query permissions is an efficient way of gathering all that data and gaining the overview needed to assess identity risks in GCP.
In our next article on this topic, we will share our experience achieving Least Privilege with GCP Privileged Access Manager.
If you're interested in learning more about our Cloud Security Assessment offering: https://www.o3c.no/services/cloud-security-assessment