Re: Snyk Grabs $70M More To Detect Security Vulnerabilities In Open-source Code And Containers


Selesio Gurule

Jul 9, 2024, 12:56:05 AM
to brenunrothse

Static application security testing (SAST), a subset of static code analysis, analyzes source code to identify vulnerabilities that could leave applications open to malicious attacks. SAST uses vulnerability-scanning techniques that concentrate on source code and bytecode to detect security issues like injection flaws or memory-management errors. Because the analysis runs before the code is ever executed, with full visibility into the source, SAST is a form of white-box testing. By using SAST tools, you can better protect your applications from potential security threats.

SAST is a technique used to evaluate source code without actually executing it. It involves examining the program's structure and syntax to identify potential issues and errors, such as coding mistakes, security vulnerabilities, and performance bottlenecks. The process involves parsing the source code, building an abstract syntax tree, and applying various analysis techniques to detect issues. By providing early feedback on potential issues in the code, SAST can help improve software quality and reduce the likelihood of errors and security vulnerabilities.
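The parse-then-analyze workflow described above can be illustrated with a toy checker. This is a minimal sketch, not a real SAST product: it uses Python's standard `ast` module to build an abstract syntax tree and flag calls to `eval()`, a common injection risk, without ever running the code.

```python
import ast

# Example source under analysis; note it is never executed, only parsed.
SOURCE = """
user_input = input()
result = eval(user_input)
"""

def find_eval_calls(source: str) -> list[int]:
    """Parse the source into an AST and return line numbers where eval() is called."""
    tree = ast.parse(source)
    findings = []
    for node in ast.walk(tree):
        # A direct call like eval(x) is a Call node whose func is the bare name "eval".
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            findings.append(node.lineno)
    return findings

print(find_eval_calls(SOURCE))  # → [3]
```

A real SAST engine layers many such rules (taint tracking, data-flow analysis, framework-specific checks) on top of this same parse-and-walk foundation.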

Snyk grabs $70M more to detect security vulnerabilities in open-source code and containers





Application developers have transitioned from writing custom code to assembling reusable components and open-source libraries. This approach enables rapid iteration and more continuous deployment for DevOps teams. But it can also increase cyber risk if developers unknowingly use vulnerable open-source code.

The Snyk integration offers a seamless user experience within Tenable.io Container Security, with open-source code vulnerabilities in Ruby, Python and Node.js appearing alongside all other vulnerabilities in a single interface. Support for additional open-source libraries will be added over time. Simply navigate to the Image Details overview to view all vulnerabilities in a particular container image, including OS-level and open-source component issues.

The Go security team introduced govulncheck in September 2022. Govulncheck is an open-source command-line utility that analyzes code and warns about known issues in Go modules and the Go standard library. Behind the scenes, govulncheck grabs its data from the Go vulnerability database, which is maintained and curated by the Go security team.

And that's not just source code. It might be open-source libraries that you have dependencies on; it might be infrastructure-as-code scripts like Terraform or CloudFormation, scanned for misconfigurations before you launch into production in the cloud; and then container scanning as well, to check the source for containers, like the Dockerfile, because you can obviously determine what the base image is going to be, and whether any user software being installed has vulnerabilities in, say, the Linux kernels you're using in the base images.

We can also scan directly in a container registry, like ECR, for example. Obviously, most folks will store their containers in a registry after they have been built, before they deploy, and then ultimately, even in production, we can connect to a Kubernetes cluster to scan container images right there and then as well. And when I talked about the SDLC, the Software Development Life Cycle, I believe it extends into production as well, because we want to monitor the code that gets deployed in production. If there's a zero-day vulnerability like Log4j, for example, obviously you want to immediately know about that and patch the environments, but there could also be drift detection, right? So you wanna know if things changed in production.

But as you know, developers like the CLI as well, so if they've got a project already on their machine they can use the Snyk CLI to do the same thing. It basically scans the code that's already on their machine, or the other asset types they have there, and they get the same feedback before they check their code into source control. Similar with the IDE, right? The IDE integration is essentially a wrapper around the CLI, so they get the same experience in the IDE as they're developing the code, with real-time feedback about any issues not only in their own code but in the open-source dependencies, containers, and IaC too. Does that make sense?

Okay, interesting. Yeah, I mean, I think that's the future here too. We're building a lot of correlation, because we of course have the CSPM side of things, we have the CIEM, and also the threat detection, so we're building correlation across them, which I think gives higher validity to alerts and findings. As you mentioned, ultimately we're all struggling with alert fatigue, and working through making those issues more important and more impactful, so teams are fixing the things that are most relevant instead of just getting a ton of alerts. I think that's going to be huge, and something you guys can really push on, especially for the developers, because there are so many problems with a lot of the source-code analysis out there. There are just so many false positives ...

Well, that's the thing, right, you know you have to ask your question: are you going to write more software? Yes. Are you going to use more open-source code? Yes. Are you going to use more cloud services? Yes, well then you know you better get on board.

Lastly, take a look at open-source projects such as harden-runner from StepSecurity if you want to level up your GitHub Actions Runtime Security! It can help you detect and prevent risks like tampering with source code, dependencies, or artifacts during build time.

Sonatype Nexus Lifecycle is mainly used for scanning and checking vulnerabilities in open-source libraries and products. It runs in continuous integration and deployment pipelines, in IDEs, and throughout the software development pipeline for automated quality assurance. It provides software composition analysis for application security and helps customers embrace open-source development while ensuring clean code in their environment. It scans containers, binary artifacts, and third-party libraries for vulnerabilities and security issues, can be deployed on-prem or in the cloud, and is used by development companies and staffing providers with large teams of developers.

As cybersecurity attacks are on the rise, organizations are at constant risk of data breaches. Managing your software supply chain gets trickier as your organization grows, leaving many vulnerabilities exposed. With easily accessible source code that can be modified and shared freely, open-source monitoring gives users complete transparency. A community of professionals can inspect open-source code to ensure fewer bugs, and any open-source dependency vulnerability will be detected and fixed rapidly. Open-source security monitoring helps users avoid attacks by detecting potential threats automatically and remediating them immediately.

Note: If you are willing to accept downtime, you can simply take all the brokers down, update the code and start all of them. They will start with the new protocol by default.

Note: Bumping the protocol version and restarting can be done any time after the brokers were upgraded. It does not have to be immediately after.

Notable changes in 0.10.1.2

  • New configuration parameter upgrade.from added that allows rolling bounce upgrade from version 0.10.0.x
Potential breaking changes in 0.10.1.0
  • The log retention time is no longer based on last modified time of the log segments. Instead it will be based on the largest timestamp of the messages in a log segment.
  • The log rolling time no longer depends on the log segment create time. Instead, it is now based on the timestamp in the messages. More specifically, if the timestamp of the first message in the segment is T, the log will be rolled out when a new message has a timestamp greater than or equal to T + log.roll.ms
  • The open file handlers of 0.10.0 will increase by 33% because of the addition of time index files for each segment.
  • The time index and offset index share the same index size configuration. Since each time index entry is 1.5x the size of an offset index entry, users may need to increase log.index.size.max.bytes to avoid potentially frequent log rolling.
  • Due to the increased number of index files, on some brokers with a large number of log segments (e.g. >15K), the log loading process during broker startup could take longer. Based on our experiment, setting num.recovery.threads.per.data.dir to one may reduce the log loading time.
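The timestamp-based retention and rolling rules above can be sketched as a simplified model. This is illustrative only, not Kafka's actual implementation; the config values are hypothetical examples.

```python
# Simplified model of 0.10.1 timestamp-based log rolling and retention.

LOG_ROLL_MS = 60_000  # hypothetical log.roll.ms value (1 minute)

def should_roll(first_message_ts: int, new_message_ts: int,
                log_roll_ms: int = LOG_ROLL_MS) -> bool:
    # A segment is rolled when a new message's timestamp reaches the first
    # message's timestamp plus log.roll.ms, regardless of the segment file's
    # create time.
    return new_message_ts >= first_message_ts + log_roll_ms

def is_expired(largest_message_ts: int, now_ms: int, retention_ms: int) -> bool:
    # Retention is based on the largest message timestamp in a segment,
    # not the file's last-modified time.
    return now_ms - largest_message_ts > retention_ms

print(should_roll(0, 60_000))          # → True  (exactly T + log.roll.ms)
print(is_expired(0, 100_000, 50_000))  # → True  (oldest data past retention)
```

The practical consequence is that replaying old data with old timestamps, or producers sending skewed timestamps, can trigger rolling or expiry earlier than wall-clock intuition suggests.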
Upgrading a 0.10.0 Kafka Streams Application
  • Upgrading your Streams application from 0.10.0 to 0.10.1 does require a broker upgrade because a Kafka Streams 0.10.1 application can only connect to 0.10.1 brokers.
  • There are a couple of API changes that are not backward compatible (cf. Streams API changes in 0.10.1 for more details). Thus, you need to update and recompile your code. Just swapping the Kafka Streams library jar file will not work and will break your application.
  • Upgrading from 0.10.0.x to 0.10.1.2 requires two rolling bounces with config upgrade.from="0.10.0" set for first upgrade phase (cf. KIP-268). As an alternative, an offline upgrade is also possible.
    • prepare your application instances for a rolling bounce and make sure that config upgrade.from is set to "0.10.0" for new version 0.10.1.2
    • bounce each instance of your application once
    • prepare your newly deployed 0.10.1.2 application instances for a second round of rolling bounces; make sure to remove the value for config upgrade.from
    • bounce each instance of your application once more to complete the upgrade
  • Upgrading from 0.10.0.x to 0.10.1.0 or 0.10.1.1 requires an offline upgrade (rolling bounce upgrade is not supported)
    • stop all old (0.10.0.x) application instances
    • update your code and swap old code and jar file with new code and new jar file
    • restart all new (0.10.1.0 or 0.10.1.1) application instances
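The two-phase rolling bounce above can be sketched in pseudocode form. Here `deploy` and `bounce` are hypothetical stand-ins for your own deployment tooling, not Kafka APIs:

```python
# Sketch of the two-phase rolling bounce for upgrading Streams apps
# from 0.10.0.x to 0.10.1.2 (illustrative; deploy/bounce are hypothetical).
def rolling_upgrade(instances, deploy, bounce):
    # Phase 1: deploy 0.10.1.2 code with upgrade.from="0.10.0" set,
    # then bounce each instance once.
    for inst in instances:
        deploy(inst, version="0.10.1.2", config={"upgrade.from": "0.10.0"})
        bounce(inst)
    # Phase 2: remove the upgrade.from config and bounce each instance
    # once more to complete the upgrade.
    for inst in instances:
        deploy(inst, version="0.10.1.2", config={})
        bounce(inst)
```

The key point is that every instance is bounced twice: the first pass lets new-version instances speak the old protocol alongside the remaining old instances, and the second pass switches the whole group to the new protocol.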
Notable changes in 0.10.1.0
  • The new Java consumer is no longer in beta and we recommend it for all new development. The old Scala consumers are still supported, but they will be deprecated in the next release and will be removed in a future major release.
  • The --new-consumer/--new.consumer switch is no longer required to use tools like MirrorMaker and the Console Consumer with the new consumer; one simply needs to pass a Kafka broker to connect to instead of the ZooKeeper ensemble. In addition, usage of the Console Consumer with the old consumer has been deprecated and it will be removed in a future major release.
  • Kafka clusters can now be uniquely identified by a cluster id. It will be automatically generated when a broker is upgraded to 0.10.1.0. The cluster id is available via the kafka.server:type=KafkaServer,name=ClusterId metric and it is part of the Metadata response. Serializers, client interceptors and metric reporters can receive the cluster id by implementing the ClusterResourceListener interface.
  • The BrokerState "RunningAsController" (value 4) has been removed. Due to a bug, a broker would only be in this state briefly before transitioning out of it and hence the impact of the removal should be minimal. The recommended way to detect if a given broker is the controller is via the kafka.controller:type=KafkaController,name=ActiveControllerCount metric.
  • The new Java Consumer now allows users to search offsets by timestamp on partitions.
  • The new Java Consumer now supports heartbeating from a background thread. There is a new configuration max.poll.interval.ms which controls the maximum time between poll invocations before the consumer will proactively leave the group (5 minutes by default). The value of the configuration request.timeout.ms must always be larger than max.poll.interval.ms, since that is the maximum time that a JoinGroup request can block on the server while the consumer is rebalancing, so its default value has been changed to just above 5 minutes. Finally, the default value of session.timeout.ms has been adjusted down to 10 seconds, and the default value of max.poll.records has been changed to 500.
  • When using an Authorizer and a user doesn't have Describe authorization on a topic, the broker will no longer return TOPIC_AUTHORIZATION_FAILED errors to requests since this leaks topic names. Instead, the UNKNOWN_TOPIC_OR_PARTITION error code will be returned. This may cause unexpected timeouts or delays when using the producer and consumer since Kafka clients will typically retry automatically on unknown topic errors. You should consult the client logs if you suspect this could be happening.
  • Fetch responses have a size limit by default (50 MB for consumers and 10 MB for replication). The existing per partition limits also apply (1 MB for consumers and replication). Note that neither of these limits is an absolute maximum as explained in the next point.
  • Consumers and replicas can make progress if a message larger than the response/partition size limit is found. More concretely, if the first message in the first non-empty partition of the fetch is larger than either or both limits, the message will still be returned.
  • Overloaded constructors were added to kafka.api.FetchRequest and kafka.javaapi.FetchRequest to allow the caller to specify the order of the partitions (since order is significant in v3). The previously existing constructors were deprecated and the partitions are shuffled before the request is sent to avoid starvation issues.
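The consumer timing constraints in the notes above can be sanity-checked with a small sketch. The values mirror the 0.10.1 defaults described there; this is plain Python, not Kafka client code:

```python
# Illustrative consumer settings matching the 0.10.1 defaults discussed above.
consumer_config = {
    "max.poll.interval.ms": 300_000,  # 5 minutes between poll() calls
    "request.timeout.ms": 305_000,    # must exceed max.poll.interval.ms
    "session.timeout.ms": 10_000,     # default lowered to 10 seconds
    "max.poll.records": 500,          # default changed to 500
}

def validate(config: dict) -> bool:
    # request.timeout.ms must exceed max.poll.interval.ms so a JoinGroup
    # request can block for the full rebalance window without timing out.
    return config["request.timeout.ms"] > config["max.poll.interval.ms"]

print(validate(consumer_config))  # → True
```

A config that kept the old 30-second request timeout alongside a 5-minute poll interval would fail this check, which is exactly the mismatch the new defaults were changed to avoid.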
New Protocol Versions
  • ListOffsetRequest v1 supports accurate offset search based on timestamps.
  • MetadataResponse v2 introduces a new field: "cluster_id".
  • FetchRequest v3 supports limiting the response size (in addition to the existing per partition limit), it returns messages bigger than the limits if required to make progress and the order of partitions in the request is now significant.
  • JoinGroup v1 introduces a new field: "rebalance_timeout".
Upgrading from 0.8.x or 0.9.x to 0.10.0.0
0.10.0.0 has potential breaking changes (please review before upgrading) and possible performance impact following the upgrade. By following the recommended rolling upgrade plan below, you guarantee no downtime and no performance impact during and following the upgrade.
Note: Because new protocols are introduced, it is important to upgrade your Kafka clusters before upgrading your clients.
