Alright, alright, alright! So, you dove head-first into the chaotic jungle of GCP governance, right? And let’s be honest, it’s not that your colleagues are on a mission to turn the cloud into a Wild West; it’s just that sometimes, when you’re exploring new territories, you might step on a few rakes or trip over some buried treasure. 🤠💰
You had a eureka moment: Why not be the Sherlock Holmes of your cloud environment?🕵️♂️ You’re not gonna be the Cloud Police handing out citations left and right. Nah, you’re more like a savvy detective with a magnifying glass, peeping at the scene and thinking, “Hmm, what’s really going on here?” 🧐
Your master plan: “Audit-as-Code” and “Policy-as-Code,” but make it like a reality TV show. No eliminations, no drama, just discreet cameras capturing the action so you can review the “footage” and decide the season finale. 🎥
In a nutshell, you’re setting up your own GCP Big Brother house. You won’t block anyone; you’ll just sit back with your popcorn 🍿 and get notified when Johnny accidentally spins up a VM that costs more than a used car or when Susie changes an IAM policy that could open the floodgates. 🌊
And the best part? You’re making it all happen like a script from your favorite series, automating the drama so you never miss an episode. 🎬
So, my friend, you’re not just cloud governing; you’re cloud groovin’! 😎🕺💃
With me, no complicated stuff. I’ll always find a smooth way to do it with you. I don’t like fancy tech jargon, or people complicating stuff just to show off 🚎 😙
More serious now 😆 my initiative is to implement a monitoring system for our GCP environment through auditing, and I think it’s a good one. This kind of monitoring is crucial for large organizations that need to keep tabs on what’s happening in their cloud environments, especially for governance, compliance, and cost management.
Revised Explanation of the Solution:
Objective: Set up a monitoring system in Google Cloud Platform that keeps track of key events like IAM changes, project creation, and resource allocation without enforcing blocks or limitations. This “watch-only” approach provides a window into activities and lets us plan next steps without hindering ongoing work (and with no crying over 💰). Yes, I am silly as f**k.
Technology Stack:
- Google Cloud Logging for audit logs
- Google Cloud Pub/Sub for event-driven architecture
- Google Cloud Functions for real-time processing
- Slack for notifications
Steps:
Create a Cloud Logging Sink
- Aim: To filter and forward only the specific logs of interest. (Choose your events carefully; forwarding and processing every log entry can run up a big cost.) Note: the Pub/Sub topic from the next step must already exist when you create the sink, so run that command first.
gcloud logging sinks create service-account-audit \
pubsub.googleapis.com/projects/<project_id>/topics/service-account-audit \
--log-filter="protoPayload.methodName:\"google.iam.admin.v1.CreateServiceAccount\" OR protoPayload.methodName:\"google.iam.admin.v1.SetIamPolicy\"" \
--include-children \
--unique-writer-identity \
--organization=<organization_id>
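If you want to sanity-check which methodName values that filter will forward before wiring anything up, you can approximate it in plain Python. The `:` operator in Cloud Logging filters is a "has"/substring-style match; the helper below only mimics that behavior for illustration and is not part of any GCP API:

```python
# Methods the sink filter above forwards (the ':' operator is a substring-style match).
WATCHED_METHODS = (
    'google.iam.admin.v1.CreateServiceAccount',
    'google.iam.admin.v1.SetIamPolicy',
)


def sink_would_forward(method_name: str) -> bool:
    """Rough local approximation of the sink's --log-filter check."""
    return any(watched in method_name for watched in WATCHED_METHODS)


print(sink_would_forward('google.iam.admin.v1.SetIamPolicy'))          # True
print(sink_would_forward('google.iam.admin.v1.DeleteServiceAccount'))  # False
```

Handy for deciding whether a new event type needs its own filter clause or is already covered.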
Create a Pub/Sub Topic
- Aim: A messaging queue that will act as the middleman between the logs and the Cloud Function.
gcloud pubsub topics create service-account-audit
Create a Cloud Function
- Aim: A function that triggers when a new message arrives in the Pub/Sub topic and sends a Slack notification.
gcloud functions deploy audit-log-slack \
--runtime python311 \
--trigger-topic service-account-audit \
--entry-point process_audit_log_entry \
--set-env-vars "SLACK_WEBHOOK_URL=<your_slack_webhook_url>"
import base64
import json
import os

import requests


def process_audit_log_entry(event, context):
    # Pub/Sub delivers the log entry as base64-encoded JSON.
    data = base64.b64decode(event['data']).decode('utf-8')
    log_entry = json.loads(data)

    event_type = log_entry['protoPayload']['methodName']
    principal_email = log_entry['protoPayload']['authenticationInfo']['principalEmail']
    resource_name = log_entry['resource']['labels']['project_id']

    # serviceData/policyDelta only exists for SetIamPolicy events,
    # so fall back to an empty list for everything else.
    service_data = log_entry['protoPayload'].get('serviceData', {})
    binding_deltas = service_data.get('policyDelta', {}).get('bindingDeltas', [])

    text = (f'\U0001F648 \U0001F649 \U0001F64A An event of type "{event_type}" '
            f'was triggered by "{principal_email}" in project "{resource_name}".\n\n')

    if binding_deltas:
        text += 'Binding Deltas:\n'
        for delta in binding_deltas:
            action = delta.get('action')
            role = delta.get('role')
            member = delta.get('member')
            text += f'- Action: {action}, Role: {role}, Member: {member}\n'

    send_slack_notification(text)


def send_slack_notification(text):
    print(text)
    webhook_url = os.environ.get('SLACK_WEBHOOK_URL')
    if not webhook_url:
        raise ValueError('Slack Webhook URL must be set in environment variables.')

    payload = {'text': text}
    response = requests.post(
        webhook_url,
        data=json.dumps(payload),
        headers={'Content-Type': 'application/json'},
    )
    if response.status_code != 200:
        raise ValueError(
            f'Request to Slack returned an error: {response.status_code}, '
            f'the response is:\n{response.text}'
        )
- Above is the code that parses each log entry and sends the notification to Slack.
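Before deploying, you can smoke-test the parsing logic locally by faking the event Pub/Sub hands to the function. The sample log entry below is a hand-made sketch of a SetIamPolicy audit record (real entries carry many more fields), and the emails and project ID are made up:

```python
import base64
import json

# Hand-crafted sample of a SetIamPolicy audit log entry (an assumption of
# the shape; real entries carry many more fields).
sample_entry = {
    'protoPayload': {
        'methodName': 'google.iam.admin.v1.SetIamPolicy',
        'authenticationInfo': {'principalEmail': 'susie@example.com'},
        'serviceData': {
            'policyDelta': {
                'bindingDeltas': [
                    {'action': 'ADD', 'role': 'roles/owner',
                     'member': 'user:johnny@example.com'},
                ]
            }
        },
    },
    'resource': {'labels': {'project_id': 'demo-project'}},
}

# Wrap it the way Pub/Sub hands it to the Cloud Function: base64-encoded JSON.
event = {'data': base64.b64encode(json.dumps(sample_entry).encode('utf-8'))}

# Decode exactly as the function does and pull out the fields it reports on.
log_entry = json.loads(base64.b64decode(event['data']).decode('utf-8'))
print(log_entry['protoPayload']['methodName'])                            # google.iam.admin.v1.SetIamPolicy
print(log_entry['protoPayload']['authenticationInfo']['principalEmail'])  # susie@example.com
print(log_entry['resource']['labels']['project_id'])                      # demo-project
```

If these fields come out right, the function body should handle real messages the same way.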
Grant Permissions
- Aim: Make sure the logs can flow end to end: the sink must be able to publish to the Pub/Sub topic, and the Cloud Function must be able to read from it.
First, give the sink’s writer identity (a service account printed when you create the sink, or shown by “gcloud logging sinks describe”) permission to publish to the topic:
gcloud pubsub topics add-iam-policy-binding service-account-audit \
--member=<sink_writer_identity> \
--role=roles/pubsub.publisher
Then let the Cloud Function’s service account consume from the topic:
gcloud projects add-iam-policy-binding <project_id> \
--member=serviceAccount:<audit-log-slack-function-service-account>@<project_id>.iam.gserviceaccount.com \
--role=roles/pubsub.subscriber
gcloud organizations add-iam-policy-binding <organization_id> \
--member=serviceAccount:<audit-log-slack-function-service-account>@<project_id>.iam.gserviceaccount.com \
--role=roles/logging.viewer
What’s Next?: After setting up this system, you can expand the set of events you monitor or integrate other events as needed.
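As the watch list grows (VM creation, firewall changes, whatever keeps you up at night), the filter string gets tedious to edit by hand, so a tiny helper can assemble it for you. The extra method name here (v1.compute.instances.insert) is an example to verify against your own audit logs before relying on it:

```python
def build_log_filter(method_names):
    """Assemble a Cloud Logging filter that ORs substring matches on methodName."""
    clauses = [f'protoPayload.methodName:"{name}"' for name in method_names]
    return ' OR '.join(clauses)


# The first two are the methods from the sink above; the third is an example
# (VM creation) to double-check against your own audit logs.
log_filter = build_log_filter([
    'google.iam.admin.v1.CreateServiceAccount',
    'google.iam.admin.v1.SetIamPolicy',
    'v1.compute.instances.insert',
])
print(log_filter)
```

The generated string can go straight into the sink’s --log-filter flag (or into your Terraform variables, per the advice below).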
Advice: Terraform this whole setup; it’s a great Infrastructure-as-Code practice that makes everything repeatable and version-controlled.
Peace From Tunisia