Hello. My company has recently adopted Google SecOps alongside our current ITSM tool, which we use for case management and handling. We would like SecOps to automatically create and update cases in our ITSM tool; however, I don't see any automated functionality to run things at the case level.

Could some advice be provided on how to move forward, particularly with the following:

1. Have cases created in the ITSM when an analyst changes the case state to Incident.
2. Have new alerts added to our ITSM when alerts are added to SecOps cases with incidents already raised.
3. Have alerts removed from our ITSM when alerts are removed from SecOps cases with incidents already raised.

I already have actions and integrations for our ITSM that work for these purposes, but I now need a trigger to automate them, which I cannot find in SecOps.
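Until a native case-level trigger is available, a common workaround is a scheduled SOAR job that periodically polls cases and diffs their alert membership against the ITSM ticket. The sketch below shows only the decision logic; the fetchers and action names (`create_incident`, `add_alert`, `remove_alert`) are hypothetical placeholders standing in for your existing ITSM integration actions, not real SecOps APIs.

```python
# Sketch of the sync logic a scheduled job could run for one case.
# All action names and inputs are illustrative assumptions.

def diff_alerts(secops_alert_ids, itsm_alert_ids):
    """Return (alerts to add to ITSM, alerts to remove from ITSM)."""
    secops = set(secops_alert_ids)
    itsm = set(itsm_alert_ids)
    return sorted(secops - itsm), sorted(itsm - secops)

def sync_case(case_state, secops_alert_ids, itsm_alert_ids, itsm_ticket=None):
    """Decide which ITSM actions a polling job should take for one case.

    Returns a list of (action, payload) tuples that the job would then
    execute via the existing ITSM integration actions.
    """
    actions = []
    # Requirement 1: create the ITSM incident when an analyst
    # flips the case state to Incident and no ticket exists yet.
    if case_state == "Incident" and itsm_ticket is None:
        actions.append(("create_incident",
                        {"alerts": sorted(set(secops_alert_ids))}))
        return actions
    # Requirements 2 and 3: once a ticket exists, keep alert
    # membership in sync by diffing both sides.
    if itsm_ticket is not None:
        to_add, to_remove = diff_alerts(secops_alert_ids, itsm_alert_ids)
        actions += [("add_alert", {"ticket": itsm_ticket, "alert": a})
                    for a in to_add]
        actions += [("remove_alert", {"ticket": itsm_ticket, "alert": a})
                    for a in to_remove]
    return actions
```

A `SiemplifyJob` scheduled every few minutes could feed this from the Get Cases API and your ITSM's ticket query, then run the resulting actions; treat the polling interval and state tracking as details to verify in your environment.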
How can we set the new community UI to use the light theme? I find the dark theme not so great, and I'd like to have the light theme.
Announcing the release of a simple SecOps API Wrapper SDK: https://pypi.org/project/secops/

Using the SecOps API is now as easy as:

```
pip install secops
```

```python
from secops import SecOpsClient

client = SecOpsClient()
chronicle = client.chronicle(
    customer_id="your-chronicle-instance-id",
    project_id="your-project-id",
    region="us"
)
```

Currently supported methods:

- UDM Search
- Stats Search
- CSV Export
- Entity Summaries
- Entity Summary from UDM Search
- List IOC Matches in Time Range
- Get Cases
- Get Alerts

Please let us know your feedback, and which other use cases you'd like to see supported.
Hi all, I'm looking for some clarity around the use and interpretation of the metadata.log_type and metadata.base_labels.log_types fields in Google SecOps / Chronicle UDM, particularly in relation to the log ingestion method and parser behaviour.

The standard flow: when data (e.g., Windows Event Logs) is ingested via agents like BindPlane, Chronicle automatically detects the log source (e.g., WINEVTLOG) and uses the appropriate parser, in this case the Windows Event parser. The parsed UDM ends up with fields such as:

```
"metadata": {
  "logType": "WINEVTLOG",
  ...
  "baseLabels": {
    "logTypes": ["WINEVTLOG"]
  }
}
```

This makes sense: the original raw event (XML) is parsed and normalized, and the parser used is reflected here.

My question: when I take a pre-parsed UDM log (in the same format as above) and upload it manually via the Events Import API, the fields instead show:

```
"metadata": {
  "logType": "UDM",
  ...
  "baseLabels": {
    "logTypes": ["UDM"]
  }
}
```

This behavior is expected, I suppose, since t…
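The contrast described above can be captured in a tiny helper: events imported pre-parsed via the Events Import API carry `logType: "UDM"` (the original source type is not preserved), while agent-ingested events keep the detected source type. The field names follow the JSON in the post; treating `"UDM"` as a reliable marker is an assumption based on observed output, not a documented guarantee.

```python
# Minimal illustration of the two metadata shapes described above.
# The "UDM" marker heuristic is an assumption to verify in your tenant.

AGENT_INGESTED = {"metadata": {"logType": "WINEVTLOG",
                               "baseLabels": {"logTypes": ["WINEVTLOG"]}}}
API_INGESTED = {"metadata": {"logType": "UDM",
                             "baseLabels": {"logTypes": ["UDM"]}}}

def was_pre_parsed(event):
    """True when the event appears to have entered via the Events
    Import API as pre-parsed UDM, i.e. no Chronicle parser ran and
    the original source log type is no longer recorded."""
    return event.get("metadata", {}).get("logType") == "UDM"
```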
Hey Everyone! Welcome to the Google Cloud Security Community! We want to kick things off by getting to know each other better. This space is all about connecting, sharing, solving, and building the future of cloud security, and that journey starts with you!

So, don't be shy! Drop a quick intro below and tell us:

- Who are you? (Your name, role, etc.)
- What's your cloud security superpower? (What area excites you most, or a cool project you're working on?)
- What are you hoping to learn or share here? (Let's help each other grow!)

We're incredibly excited to learn from your unique experiences and build a vibrant hub where we can all protect, create, and innovate together. Can't wait to meet you all!

Matt
I find it a little ironic that my little test Firebase project, which uses Firebase Auth with Microsoft and Google single sign-in, got flagged by Google as a suspicious phishing site and flashed a bright red screen in the browser right when I was demonstrating it to a customer. Meanwhile, the Google sign-in on this "security" forum shows a seemingly unrelated company called insided.com in the sign-in pop-up. I'm sure the badly written AI phishing identifier program will take care of this in due course.
Hi, we are encountering discrepancies between the data shown in the Data Ingestion Health Dashboard and the SecOps data.

Specifically, I am trying to view the unparsed event count in the dashboard, which outputs an unparsed count. However, when searching for the same data source in Chronicle and checking the unparsed logs under "Event Type" (searching 10,000 logs at a time), no results are returned.

Could this discrepancy be due to the tenant using autonomous parsing? If so, why does the dashboard show a different result?

Additionally, if autonomous parsing is enabled, I understand that Chronicle will parse those events and categorize them under "GENERIC_EVENT". Is there a way to identify these events, such as through a tag or another method?

Thanks,
Sumith.P
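One way to surface events that fell back to generic parsing is a UDM search on the event type. The filter below is a hedged example: the `GENERIC_EVENT` value matches what the post describes, but the log type placeholder is yours to fill in, and whether autonomously parsed events land under the original log type or a different one is worth verifying in your tenant.

```
metadata.event_type = "GENERIC_EVENT" AND metadata.log_type = "YOUR_LOG_TYPE"
```

Comparing the count from this search against the dashboard's unparsed figure should show whether the gap is events that were parsed generically rather than left unparsed.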
I'm excited to announce the open-source launch of secops-toolkit. 🚀 After months of work, I'm excited to share this new open-source repository designed to accelerate automation for Google SecOps.

What is secops-toolkit? It is an open-source repository that provides a comprehensive collection of Terraform blueprints, modules, and CI/CD pipelines for Google SecOps, offering modular and scalable automations for the wider customer and partner community.

The GitHub repository includes:

- Terraform modules for automating Google SecOps configurations, including Data RBAC, rules, and reference lists, based on new resources from the Google provider.
- Blueprints developed with Terraform and Python scripts for automated bootstrap of Google SecOps, provisioning of new tenants in an MSSP-like architecture, and comprehensive end-to-end deployment of solutions for BindPlane and the SecOps Forwarder, as well as a sample anonymization pipeline on Google Cloud Platform.

This project heavily…
We're bringing you another Community challenge, and this time it's about the Model Context Protocol. MCP is a hot topic in the security world right now. For those just hearing about it, MCP allows AI models to communicate with and leverage the capabilities of diverse security tools. This helps enhance security workflows by ensuring models are contextually aware across multiple downstream services. With the ability to interact with security data in natural language, security teams can produce insights faster and scale their security operations. If you're just getting started with the SecOps MCP server, check out our SecOps MCP server content to learn more.

We're excited to launch this new challenge and can't wait to see all the different ways you are using the Google SecOps MCP server to boost your security operations. Knowing our expert Community users, we bet you're doing incredible things. And we want to see what you're up to! This is your chance to contribute to the Community, show off…
Hi, I have written a custom action that gets attachments from the case wall and creates an HTML table, allowing the user to click a button to download the attachment. This had been working okay, but we have now been experiencing the following error:

```
File "/opt/siemplify/siemplify_server/bin/Scripting/PythonSDK/SiemplifyBase.py", line 170, in validate_siemplify_error
    raise Exception("{0}: {1}".format(e, response.content))
Exception: 500 Server Error: Internal Server Error for url: http://server:80/v1alpha/projects/project/locations/location/instances/instance/legacySdk:legacyAttachmentData?attachmentId=3157&format=snake: b'{"errorCode":2000,"errorMessage":"An error has occurred. Search for Log identifier c55c79701e3341e380d50c8167df02c9 in the Google Cloud Logs Explorer.","innerException":null,"innerExceptionType":null,"correlationId":"c55c79701e3341e380d50c8167df02c9"}'
```

This is happening when calling siemplify.get_attachment(attachment_id), although not entirely consistently, but it s…
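Since the 500s are intermittent rather than consistent, one pragmatic stopgap (not a fix for the server-side error) is to retry the SDK call with backoff. A minimal sketch, assuming the SDK raises a plain `Exception` on HTTP errors as the traceback above shows; retry counts and delays are illustrative.

```python
import time

# Hedged workaround sketch: retry a flaky SDK call such as
# siemplify.get_attachment() on intermittent 500 responses.
# `fetch` is passed in so the wrapper stays testable and generic.

def get_attachment_with_retry(fetch, attachment_id, retries=3,
                              delay=1.0, sleep=time.sleep):
    last_error = None
    for attempt in range(retries):
        try:
            return fetch(attachment_id)
        except Exception as exc:  # SDK raises plain Exception on HTTP 500
            last_error = exc
            sleep(delay * (2 ** attempt))  # exponential backoff
    raise last_error
```

Usage inside the action would look like `get_attachment_with_retry(siemplify.get_attachment, attachment_id)`; the correlation ID in the error message is still worth chasing in Cloud Logging to find the underlying cause.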
Hey all, I've noticed that the Microsoft Defender ATP SOAR integration has "Create Isolate Machine Task" and accompanying unisolate machine task actions. However, these find the host directly from the alert. I want isolate and unisolate actions that can take a hostname/host ID as an input so that they can be run ad hoc. Any ideas?
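One route is a custom action that resolves the hostname to a machine ID and then calls the isolation endpoints directly. The sketch below only builds the requests; the URLs follow the public Microsoft Defender for Endpoint API as I understand it (`/api/machines` with an OData filter, `/api/machines/{id}/isolate` and `/unisolate`), so verify them against current Microsoft documentation before relying on this.

```python
# Request builders for an ad-hoc isolate/unisolate action that takes a
# hostname or machine id as input. Endpoints are my reading of the
# Defender for Endpoint API and should be verified against the docs.

API_BASE = "https://api.securitycenter.microsoft.com/api"

def build_lookup_request(hostname):
    """Request that resolves a hostname to its machine record(s)."""
    return ("GET", f"{API_BASE}/machines",
            {"$filter": f"computerDnsName eq '{hostname}'"})

def build_isolation_request(machine_id, comment, isolate=True,
                            isolation_type="Full"):
    """Request that isolates or releases a machine by its id."""
    action = "isolate" if isolate else "unisolate"
    body = {"Comment": comment}
    if isolate:
        body["IsolationType"] = isolation_type  # "Full" or "Selective"
    return ("POST", f"{API_BASE}/machines/{machine_id}/{action}", body)
```

A custom action would execute these with the integration's existing OAuth token via `requests`, taking the hostname as an action input parameter instead of reading it from the alert entities.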
The reliability and accuracy of Web Risk API responses is degrading and falling out of sync with how Chrome itself responds to the same malicious URLs when they are browsed. Is there any plan or initiative to address these issues?
Hi all, I'm having issues ingesting FortiNDR logs into Google SecOps using the cfproduction Docker forwarder. Here are the details: Any thoughts on why this is happening?
Hi everyone, I'm trying to send multiple alerts via a webhook to Chronicle SecOps and have them grouped under the same case. However, when I use my job (see code below), even though both alerts are sent together and share common fields, they always end up in separate cases.

```python
import sys
import json
import requests
from urllib3.util import parse_url
from SiemplifyJob import SiemplifyJob
from SiemplifyUtils import output_handler

# ====================
# Embedded Constants
# ====================
PROVIDER_NAME = "HTTP V2"
INTEGRATION_NAME = "HTTPV2"
API_REQUEST_METHODS_MAPPING = {
    "GET": "GET",
    "POST": "POST",
    "PUT": "PUT",
    "PATCH": "PATCH",
    "DELETE": "DELETE",
    "HEAD": "HEAD",
    "OPTIONS": "OPTIONS",
}
AUTH_METHOD = {
    "BASIC": "basic",
    "API_KEY": "api_key",
    "ACCESS_TOKEN": "access_token",
    "NO_AUTH": None,
}
DEFAULT_REQUEST_TIMEOUT = 120
ACCESS_TOKEN_PLACEHOLDER = "{{integration.token}}"

# ====================
# Internal Classes
# ====================
class HTTPV2DomainMismatchExc…
```
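As I understand the SOAR grouping model, alerts are only merged into one case when they share a grouping identifier (the `SourceGroupingIdentifier` field on the alert payload) and alert grouping is enabled in the platform/connector settings; sharing other common fields is not enough. Whether a job-created alert honors this exactly like a connector-created one is an assumption to verify. A minimal sketch of the idea, with illustrative field values:

```python
# Hedged sketch: give alerts that belong together the same
# SourceGroupingIdentifier so the platform can group them into one
# case (assuming alert grouping is enabled in the settings).

def build_alert(alert_id, rule_name, grouping_key):
    return {
        "TicketId": alert_id,
        "RuleGenerator": rule_name,
        # Alerts sharing this value are candidates for the same case.
        "SourceGroupingIdentifier": grouping_key,
    }

a1 = build_alert("a-1", "Webhook Rule", "incident-42")
a2 = build_alert("a-2", "Webhook Rule", "incident-42")
```

If both alerts already carry the same grouping identifier and still land in separate cases, the grouping window or the environment assignment would be the next things I would check.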
Hi everyone, I'm working on building a dashboard in Google SecOps where I want to implement a user-based filter. The idea is: when an analyst filters on a specific username, the dashboard should dynamically display:

- That user's login details
- Previous cases/incidents the user was involved in
- Last login region
- (Any other relevant user metadata)

I'm looking for suggestions or best practices on how to structure this kind of dashboard, particularly:

- What data sources should be connected or joined?
- How should the filter be designed to retrieve and reflect all these details for a selected user?
- Are there any recommended tools or integrations within GCP that simplify this?

Thanks in advance for any guidance or examples!
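For the login-details panels, a UDM search filtered on the username is the usual starting point; the filter below is a hedged example with a placeholder user ID, and depending on the log source and parser the username may land in `target.user` rather than `principal.user`, so check both in your data.

```
metadata.event_type = "USER_LOGIN" AND principal.user.userid = "jdoe"
```

Fields such as `principal.ip` and location enrichment on the matching events can feed the "last login region" panel, while case/incident history typically has to come from the SOAR side rather than UDM search.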