Introducing the New Mac Cloud Agent

In my previous blog post, I wrote about the ways our new Windows Cloud Agent addresses the identity and access management challenges that come with the shift to cloud-based infrastructure and a distributed workforce. This week, I am excited to introduce the new Idaptive Mac Cloud Agent, which extends the benefits of cloud-centric identity management to organizations with large deployments of macOS devices. The new Idaptive Mac Cloud Agent makes it easy for you to deploy Mac devices to remote employees, ensure that devices have the right set of security policies, and protect them with Multi-Factor Authentication.

The Idaptive Mac Cloud Agent connects all of your Mac endpoints to the Idaptive platform so that your local and remote users can seamlessly sign in to their devices with their Active Directory or Idaptive Cloud Directory credentials. Previously, the process of providing Mac devices to remote workers was challenging. The devices had to be set up prior to being shipped and, once received, remote users had to establish a direct connection to the corporate network via VPN to complete the enrollment process. With the new Mac Cloud Agent, the enrollment process is drastically simplified. Once the agent is deployed on the device, users can immediately log in with their existing credentials and complete the enrollment process without being connected to a VPN or corporate network.

Offline OTP: With offline one-time passcodes (OTP), you can sign in to devices protected by Multi-Factor Authentication even when you are not connected to the internet. Offline OTP setup is easily accessible directly in the agent drop-down menu and supports all standard OTP authenticator applications.
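To make the interoperability claim concrete, here is a minimal sketch of how standard time-based OTP (TOTP, RFC 6238) codes are generated, using only the Python standard library. The shared secret, six-digit length, and 30-second time step are illustrative defaults matching common authenticator apps; this is not Idaptive's implementation.

    # Minimal TOTP (RFC 6238) sketch using only the Python standard library.
    import base64, hashlib, hmac, struct, time

    def totp(secret_b32: str, digits: int = 6, step: int = 30) -> str:
        key = base64.b32decode(secret_b32)
        counter = int(time.time()) // step           # current 30-second window
        msg = struct.pack(">Q", counter)             # 8-byte big-endian counter
        digest = hmac.new(key, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                   # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # illustrative secret; output changes every 30 seconds

Because the code depends only on a shared secret and the current time, the device can verify it with no network connectivity at all, which is what makes offline MFA possible.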

In this blog post, I introduced the new Mac Cloud Agent and explained its key features, benefits, and use cases. With Mac Cloud Agent, you can centrally enroll and manage Mac endpoints, secure endpoints with Multi-Factor Authentication, and enforce device-level policies. To start using Mac Cloud Agent, simply log in to your Idaptive tenant and download the agent onto your Mac device.

IT workloads, such as applications and websites, are rapidly moving from on-premises environments and hosting facilities to the cloud. According to a recent estimate, more than 80% of enterprise workloads will be in the cloud by 2020. Along with embracing the cloud, enterprises are increasingly shifting toward a distributed workforce: it is projected that by 2027, freelance workers will make up the majority of the U.S. workforce. The combination of these factors requires IT organizations to rethink their approach to identity and access management.

Over the last 20 years, Microsoft Active Directory (AD) has been the de facto technology for centralized user management and authentication, with over 95% of enterprises using AD identity management capabilities in on-premises environments. However, AD is not suitable for addressing the identity and access challenges that come with the shift to cloud-based infrastructure and a distributed workforce. Organizations need to augment their on-premises AD deployments with cloud-centric identity management solutions, such as the Idaptive Next-Gen Access Cloud, that are scalable, can support remote users, and integrate easily with modern applications and protocols.

To this end, we are excited to introduce the new Idaptive Windows Cloud Agent as part of our 19.6 release. Together, the Next-Gen Access Cloud platform and Idaptive Windows Cloud Agent make it easy for you to adopt cloud technology, support your existing IT applications deployed on-premises, and onboard remote employees. With the new Windows Cloud Agent, you can easily join your Windows 10 endpoints to the Idaptive platform, enable end users to log in to their workstations without direct connectivity to AD, protect login with Adaptive Multi-Factor Authentication, and enforce device-level security policies. Additionally, the Windows Cloud Agent provides remote end users with a convenient self-service capability to unlock their accounts, reset their passwords, or set up offline one-time passcodes for secondary authentication.

In this blog post, I introduced the new Windows Cloud Agent and explained its key features and benefits. With Windows Cloud Agent, you can enroll and centrally manage Windows 10 devices, protect login with adaptive MFA, and enforce device-level security policies. To start using Windows Cloud Agent, simply log in to your Idaptive tenant and download the agent onto your Windows machine.

The newly released platform aims to solve some of these challenges with cloud-connected, centrally managed, always-up-to-date, lightweight agents that constantly beam security and compliance data points up to the platform. This innovative approach greatly simplifies the deployment and management of these agents. Legacy enterprise agent solutions require a large amount of infrastructure to be deployed just to manage the agents; the Qualys Cloud Agent Platform, with its simplified deployment, significantly reduces the complexity and cost of managing agents compared to these legacy solutions.

The Cloud Agent Platform stores a snapshot, which is security and compliance metadata about the target system collected by the Cloud Agent. This starts with an initial background upload of the baseline snapshot, which is only a few megabytes. After that, only incremental deltas are uploaded, in small chunks of a few kilobytes each. Since all of the heavy lifting is done in the cloud, the agent needs only a minimal footprint and minimal processing on the target system. All target systems can be scanned (via their snapshots) in the cloud as soon as new vulnerability signatures are released, which means threats can be detected without having to wait for the target system to be online. This creates huge improvements in efficiency and speed over current scanning architectures.
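As a rough illustration of this snapshot-and-delta idea (a sketch under assumed field names, not Qualys' actual data model or wire format), an agent can keep the last snapshot it uploaded and send only the entries that changed:

    # Sketch of baseline-plus-delta uploads: send the full snapshot once,
    # then only what changed. Field names here are illustrative.
    from typing import Dict, List, Tuple

    Snapshot = Dict[str, str]  # e.g. {"openssl_version": "3.0.12", ...}

    def compute_delta(previous: Snapshot, current: Snapshot) -> Tuple[Snapshot, List[str]]:
        """Return (changed_or_added_entries, removed_keys) between two snapshots."""
        changed = {k: v for k, v in current.items() if previous.get(k) != v}
        removed = [k for k in previous if k not in current]
        return changed, removed

    baseline = {"kernel": "5.15.0-91", "openssl_version": "3.0.12"}
    later    = {"kernel": "5.15.0-91", "openssl_version": "3.0.13"}

    print(compute_delta(baseline, later))
    # ({'openssl_version': '3.0.13'}, [])  -- a few bytes instead of megabytes

The platform applies each delta to its stored copy of the snapshot, which is why new vulnerability signatures can be evaluated in the cloud even while the endpoint itself is offline.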

The ability to embed the agent in the master images used to create virtual instances greatly simplifies security and compliance in dynamic, elastic environments. When new instances are created, it can be complicated to automate the scaling of scanners to address the demand. The agent simplifies checking the posture of these instances: it activates itself as soon as the instance is booted, registers itself with the Qualys Cloud Agent Platform, and uploads all of its information into the platform for analysis.
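To illustrate that activate-on-boot pattern, here is a hypothetical sketch of the first-boot logic an image-embedded agent might use; the state-file path, endpoint URL, and activation-key variable are all invented for the example and are not the Qualys agent's actual interface.

    # Hypothetical first-boot activation for an agent baked into a master
    # image: register once, remember the identity, reuse it on later boots.
    import json, os, urllib.request, uuid

    STATE_FILE = "/var/lib/exampleagent/registration.json"   # hypothetical path
    PLATFORM_URL = "https://platform.example.com/register"   # hypothetical endpoint

    def ensure_registered() -> dict:
        if os.path.exists(STATE_FILE):            # already activated on a previous boot
            with open(STATE_FILE) as f:
                return json.load(f)
        payload = json.dumps({
            "activation_key": os.environ.get("ACTIVATION_KEY", ""),
            "agent_id": str(uuid.uuid4()),        # fresh identity for this clone
        }).encode()
        req = urllib.request.Request(PLATFORM_URL, data=payload,
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp: # first boot: register with the platform
            identity = json.load(resp)
        os.makedirs(os.path.dirname(STATE_FILE), exist_ok=True)
        with open(STATE_FILE, "w") as f:          # persist so later boots skip registration
            json.dump(identity, f)
        return identity

Since the master image carries no per-instance state, every clone registers itself as a fresh asset on its first boot, with no scanner scaling required.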

These systems often live in the shadows, making them prime targets for intruders, so strong security controls and monitoring are absolutely critical. With over 30% of on-premises and cloud assets and services lacking proper inventory, organizations face a significant cybersecurity visibility gap when attacks target unknown, unmanaged, or unauthorized devices. At the same time, such assets are exceedingly difficult to find: 43% of organizations spend more than 80 hours each month tracking down unknown assets.

CAPS empowers security teams to effectively eliminate blind spots, providing continuous and comprehensive detection of all assets across their IT and OT environments. Better still, CAPS does this with zero hassle and overhead, leveraging the same trusted agent that Qualys customers rely on today for Vulnerability Management, Detection, and Response (VMDR), Patch Management (PM), File Integrity Monitoring (FIM), Endpoint Detection and Response (EDR), and more.

Complementary Scanner and Agent Asset Detection

Identify assets through Qualys CAPS that cannot be actively scanned or monitored with agents. This is often the case with assets like industrial equipment, IoT, and medical devices.

The strong focus of the Prometheus community has also allowed other open-source projects to grow and to extend the Prometheus deployment model beyond single nodes (e.g. Cortex, Thanos, and more), not to mention cloud vendors adopting Prometheus' API and data model (e.g. Amazon Managed Prometheus, Google Cloud Managed Prometheus, Grafana Cloud, and more). If you are looking for a single reason why the Prometheus project is so successful, it is this: focusing the monitoring community on what matters.

In this (lengthy) blog post, I would love to introduce a new operational mode of running Prometheus called "Agent". It is built directly into the Prometheus binary. The agent mode disables some of Prometheus' usual features and optimizes the binary for scraping and remote-writing metrics to remote locations. Introducing a mode that reduces the number of features enables new usage patterns. In this blog post I will explain why it is a game-changer for certain deployments in the CNCF ecosystem. I am super excited about this!
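For orientation, here is what running agent mode can look like. When the mode was introduced (Prometheus v2.32), it sat behind the --enable-feature=agent feature flag, and a minimal configuration needs only scrape and remote_write sections, since local querying, alerting rules, and long-term local storage are disabled in this mode. The scrape target and remote endpoint below are placeholders:

    # prometheus.yml -- minimal agent-mode configuration (a sketch;
    # the target and remote_write URL are placeholders).
    scrape_configs:
      - job_name: node
        static_configs:
          - targets: ["localhost:9100"]   # e.g. a node_exporter

    remote_write:
      - url: "https://metrics.example.com/api/v1/write"

    # Run with:
    #   prometheus --enable-feature=agent --config.file=prometheus.yml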

However, the cloud-native world is constantly growing and evolving. With the growth of managed Kubernetes solutions and clusters created on-demand within seconds, we are now finally able to treat clusters as "cattle", not as "pets" (in other words, we care less about any individual instance of them). In some cases, solutions do not even have the notion of a cluster anymore, e.g. kcp, Fargate, and other platforms.

The other interesting use case that is emerging is the notion of edge clusters or networks. With industries like telecommunications, automotive, and IoT adopting cloud-native technologies, we see more and more much smaller clusters with a restricted amount of resources. This forces all data (including observability data) to be transferred to bigger, remote counterparts, as almost nothing can be stored on those nodes.

Scraping across network boundaries can be a challenge if it adds new unknowns to a monitoring pipeline. The local pull model allows Prometheus to know exactly why a metric target has problems, and when: maybe it's down, misconfigured, restarted, too slow to give us metrics (e.g. CPU-saturated), not discoverable by service discovery, or we don't have credentials to access it, or DNS, the network, or the whole cluster is down. By putting our scraper outside the network, we risk losing some of this information, because we introduce unreliability into scrapes that is unrelated to any individual target. On top of that, we risk losing important visibility completely if the network is temporarily down. Please don't do it. It's not worth it. (:
