Computing Essentials 2023 Making It Work For You


MEDICAL STUDENTS: You will receive training with HITS during your M1 Orientation week. See the Computing Essentials guide below and the Laptop FAQ for all your technology questions. Watch the HITS Orientation video, which covers helpful computing information.

Many applications will be essential to your daily work. Read on for an overview of connecting to email, file storage and file sharing, chat and video conferencing tools, and productivity and learning apps.

Please note: HITS discourages the use of removable data storage media (such as USB flash drives), and such media should never be used with ePHI or sensitive data. Instead, you must use one of our approved tools for General Needs or Research Focus. If removable media must be used, it must employ encryption technology so that all data on it is encrypted.

Visit the Michigan Medicine Help Center to chat with an agent, find answers to common questions, or submit a ticket for help. For more guidance on remote work settings, see the HITS Tech Guide for Working Remotely.

Edge computing is a significant presence in today's technical landscape. The market size for edge computing products and services has more than doubled since 2017, and according to the statistics site Statista, it is projected to grow dramatically by 2025. (See Figure 1, below.)

Given the evolution of technology and the growth expected in the coming years, having a basic understanding of edge computing is essential for the modern Enterprise Architect. The purpose of this article is to provide that basic understanding.

In this article, I cover four topics that are fundamental to the technology. First, I'll introduce the basic concepts of edge computing. Next, I'll discuss the essential value proposition for edge computing. I'll follow up by describing an emerging pattern in edge computing: the Fog vs. the edge. Finally, I'll look at how the operational adoption of artificial intelligence has put edge computing at the forefront of modern architecture design.

Edge computing is a distributed computing pattern. Computing assets on a very wide network are organized so that certain computational and storage devices that are essential to a particular task are positioned close to the physical location where a task is being executed. Computing resources relevant to the task, but not essential to it, are placed in remote locations.

In an edge computing scenario, edge devices such as a video camera or motion detector will have only the amount of computation logic and storage capacity required to do the task at hand. Usually, these edge devices are very small systems such as a Raspberry Pi, or networkable appliances that have task-specific logic embedded in dedicated, onboard computers. (See Figure 2, below.)
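To make this concrete, here's a minimal sketch in Python of what the onboard logic of such a device might look like. Everything here is hypothetical: the FOG_ENDPOINT address, the read_sensor() driver stub, and the event format are illustrative stand-ins, not a real vendor SDK.

```python
# Hypothetical sketch of an edge device's onboard logic: the device keeps
# only enough intelligence to detect motion locally and report events
# upstream, rather than streaming raw sensor data.
import json
import time
import urllib.request

FOG_ENDPOINT = "http://fog.example.local/events"  # assumed upstream address
THRESHOLD = 0.8                                   # assumed motion-confidence cutoff

def read_sensor() -> float:
    """Placeholder for the device's sensor driver; returns a 0..1 motion score."""
    return 0.0  # a real device would read its hardware here

def report(event: dict) -> None:
    """Send a small JSON event upstream instead of raw sensor frames."""
    data = json.dumps(event).encode("utf-8")
    req = urllib.request.Request(FOG_ENDPOINT, data=data,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req, timeout=5)

while True:
    score = read_sensor()
    if score >= THRESHOLD:               # only events cross the network
        report({"device": "cam-01", "motion": score, "ts": time.time()})
    time.sleep(0.1)                      # sample roughly ten times per second
```

The design choice to notice is that raw sensor readings never leave the device; only small, meaningful events do.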

The remote computers, to which the edge devices are connected, tend to be much more powerful and are provisioned to do more complex work. As such, these remote computers usually reside in a data center in the cloud.

While much of the technology around edge computing is still evolving, the basic concept has been in play for a while in the form of Content Delivery Networks (CDNs). A CDN is a network architecture in which content is pushed out to servers closest to the points of consumption, thus reducing latency and providing a high-quality experience to the consumer. (See Figure 3, below.)

For example, a company such as Netflix, which has viewers worldwide, will push content out to servers located at various locations across the globe. When a viewer logs in to Netflix and selects a film to view, the Netflix digital infrastructure's internals determine the point nearest the viewer from which to stream the movie and deliver the content accordingly. The process is hidden from the viewer. The internal mechanism that implements the content delivery network is the Open Connect system developed by Netflix.
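As a toy illustration of the "nearest point" idea (this is not Netflix's actual Open Connect logic), a sketch might pick the server with the smallest great-circle distance to the viewer. The server list and coordinates below are made up.

```python
# Illustrative sketch of nearest-server selection in a CDN: choose the
# content server with the smallest great-circle distance to the viewer.
from math import radians, sin, cos, asin, sqrt

# Hypothetical edge-server locations: name -> (latitude, longitude)
SERVERS = {
    "us-west": (34.05, -118.24),   # Los Angeles
    "us-east": (40.71, -74.01),    # New York City
    "eu-west": (51.51, -0.13),     # London
}

def haversine_km(a, b):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def nearest_server(viewer):
    """Return the name of the server closest to the viewer's (lat, lon)."""
    return min(SERVERS, key=lambda name: haversine_km(viewer, SERVERS[name]))

print(nearest_server((40.73, -73.94)))  # a viewer in NYC -> "us-east"
```

A production CDN routes on much more than distance (network topology, server load, peering agreements), but distance is the core intuition.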

A physical analogy to understand the value of edge computing is to imagine the delivery operation of a fictitious national online retailer I'll call Acme Online. Acme Online is a central eCommerce system. It allows anybody in the USA to buy an item from its site and have the purchased item delivered anywhere in the USA. (See Figure 4, below.)

The scenario shown above in Figure 4 has Bobby in Los Angeles buying a gift for his friend Billy in New York City (NYC). Billy is over 3,000 miles away from Bobby. Acme Online has several physical warehouses distributed nationwide. One of the warehouses is in Los Angeles. Another one is in NYC. After Bobby makes his purchase, the intelligence in the Acme Online central data center routes the order to the warehouse nearest the recipient, the one in NYC, which fulfills the order and delivers the gift locally.

The efficiency is apparent. Executing the task of delivering the package is best performed from the distribution point nearest the recipient. The closer the delivery point is to the recipient, the less time and resources are required to do the task.

While illustrative, this example might seem a bit trivial. However, there's more going on than is immediately apparent. Further analysis reveals that there are really two types of computing entities in play in the warehouses. One type, in edge computing parlance, is called the Fog. The other type is the edge. You can think of the warehouse as the Fog and the truck that delivers the gift as the edge. Let's take a look at the difference.

In the early days of edge computing, devices such as video cameras and motion detectors were directly hooked up to a central computing location, typically an in-house data center. As usage grew, however, a problem developed: the computing resources were stretched too thin. Too much data that was hard to process had to travel too far. (A video camera that records at 30 frames per second (fps) and sends each frame back to a central server for storage will max out disk I/O and slow the network down in no time at all.)
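A bit of back-of-envelope arithmetic shows why this doesn't scale. The 2 MB per-frame size and 100-camera fleet below are assumptions chosen only to illustrate the order of magnitude.

```python
# Back-of-envelope arithmetic for the camera example: raw frames add up fast.
# The 2 MB frame size is an assumption (roughly a 1080p frame, lightly compressed).
FPS = 30
FRAME_MB = 2.0          # assumed average size of one frame, in megabytes
CAMERAS = 100           # assumed fleet size

per_camera_mb_s = FPS * FRAME_MB                          # 60 MB/s per camera
fleet_gb_hour = per_camera_mb_s * CAMERAS * 3600 / 1024   # whole fleet, per hour

print(f"{per_camera_mb_s:.0f} MB/s per camera")
print(f"{fleet_gb_hour:,.0f} GB/hour for {CAMERAS} cameras")
```

Even with these modest assumptions, a single camera saturates a fast link, and a fleet produces tens of terabytes a day. Something between the cameras and the data center has to absorb that load.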

The Fog is a layer of computing that sits between the central cloud and the edge devices. Going back to the Acme Online analogy described above, the Acme Online architecture puts regional warehouses between the central cloud and the trucks delivering its goods. As mentioned above, the warehouses are the Fog, and the trucks are the edge. Logic is segmented accordingly. A truck has only the intelligence required to interact with the warehouse and do actual package delivery.

On the other hand, the warehouse knows how to receive and store inventory, fulfill orders, and assign orders to a truck. Also, the warehouse knows how to interact with the central data center at Acme Online as well as all the trucks stationed at the warehouse. In other words, the warehouse is the Fog layer that acts as the intermediary between the central data center and the edge.
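Translated into code, a Fog node might look something like the following minimal sketch. All names here (FogAggregator, send_summary_to_cloud) are hypothetical; the point is that raw edge events stay in the Fog, and only compact summaries cross the wide-area link to the central data center.

```python
# Minimal sketch of a Fog-layer aggregator: it accepts small events from
# edge devices, keeps rolling statistics locally, and forwards only
# periodic summaries to the central cloud.
import time
from collections import defaultdict

class FogAggregator:
    def __init__(self, flush_interval_s: float = 60.0):
        self.counts = defaultdict(int)   # events per device since last flush
        self.last_flush = time.time()
        self.flush_interval_s = flush_interval_s

    def on_edge_event(self, device_id: str) -> None:
        """Called for every event an edge device reports; data stays in the Fog."""
        self.counts[device_id] += 1
        if time.time() - self.last_flush >= self.flush_interval_s:
            self.flush_to_cloud()

    def flush_to_cloud(self) -> None:
        """Only a compact summary crosses the wide-area link to the cloud."""
        send_summary_to_cloud(dict(self.counts))
        self.counts.clear()
        self.last_flush = time.time()

def send_summary_to_cloud(summary: dict) -> None:
    print("summary ->", summary)         # stand-in for a real cloud API call
```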

This physical analogy holds true in a digital infrastructure. A high-volume edge architecture puts a layer of computing between the edge devices and the cloud to improve the system's overall performance. Implementing a Fog layer between edge devices and the central cloud improves system security as well.

One emerging architectural style places the Fog layer of a distributed application in a private cloud when the application must consume and process confidential information according to the governance rules of its locale. One example is an architecture in which a bank's automated teller machines (ATMs) connect to the institution's private network. In this example, the ATMs are the edge devices, and the bank's private network is the Fog. The Fog handles authentication and verification relevant to simple transactions. However, when more complex analytic computation that requires enormous computing resources is needed, that work is passed off to a public cloud in a secure manner. Typically, this type of intense computation is related to the machine learning that powers artificial intelligence. In fact, the public cloud/private cloud (Fog)/edge segmentation found in edge architecture is well suited for applications that rely on a robust AI infrastructure.
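A minimal sketch of that routing decision might look like the following, assuming a hypothetical set of transaction kinds; a real bank's Fog layer would, of course, be far more involved.

```python
# Sketch of the segmentation described above (all names hypothetical):
# the Fog layer handles routine ATM transactions itself and hands
# compute-heavy analytics, such as ML-based fraud scoring, to the public cloud.
SIMPLE_TRANSACTIONS = {"balance_inquiry", "withdrawal", "deposit"}

def handle_atm_request(kind: str, payload: dict) -> dict:
    """Route a request from an ATM (the edge) to the right layer."""
    if kind in SIMPLE_TRANSACTIONS:
        return process_in_fog(payload)              # stays on the private network
    return offload_to_public_cloud(kind, payload)   # e.g., fraud-pattern analysis

def process_in_fog(payload: dict) -> dict:
    """Authentication and simple ledger updates run in the private cloud."""
    return {"status": "ok", "handled_by": "fog"}

def offload_to_public_cloud(kind: str, payload: dict) -> dict:
    """Heavy analytics go out over a secured channel to rented compute."""
    return {"status": "ok", "handled_by": "public-cloud", "task": kind}
```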

These days, the modern cell phone has made us accustomed to using AI in our day-to-day lives, even if it works behind the scenes. With technologies such as Google Lens, Google has built image recognition right into its Android phones. You can point the phone's camera at a bottle of your favorite brand of ketchup, and the application will go out to the Internet and find the store nearest you where you can buy that ketchup. It's a pretty amazing feat, especially when you consider that the cell phone has evolved to the point where it actually has the computing power to do the initial image recognition. It wasn't always that way. In the past, this level of computing could only take place on very powerful computers.

Taking a photo and sending it in an email has been a cell phone feature for a while. As a result, billions of pet photos have become a permanent fixture on the Internet. In the past, a cell phone could take a picture of a dog, but it had no idea that the image was that of a dog. That work needed to be done by more powerful machines that understood what a dog looks like. The process that makes this kind of image identification possible is called modeling.

The way modeling works is that a computer program is fed a very large number of images that depict a thing of interest, in this case, a dog. The program has the logic to determine a generic pattern that describes the item of interest. In other words, after feeding the program a few million pictures of different dogs, eventually it determines the common characteristics of dogs and is able to identify one in a random image. This general description is called a model.
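The following toy sketch shows the shape of the training step, using scikit-learn and random feature vectors as stand-ins for real images. It is not an actual dog detector, just an illustration that training distills many labeled examples into a small, reusable model.

```python
# Conceptual sketch of model training (a hypothetical pipeline, not a real
# dog detector): a learner sees many labeled examples and distills them into
# a reusable model, which is exactly the expensive, cloud-side step.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in "images": feature vectors labeled 1 for dog, 0 for not-dog.
X = rng.normal(size=(10_000, 64))         # pretend these are image features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # a toy stand-in for "is a dog"

model = LogisticRegression(max_iter=1000).fit(X, y)   # the costly training step

# The finished model is small and cheap to apply compared with training it.
new_image = rng.normal(size=(1, 64))
print("dog probability:", model.predict_proba(new_image)[0, 1])
```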

It takes a lot of computing power to create a model but less power to use one. Initially, in the world of AI, both defining a model and using one was done in a data center. When it came to figuring out if a photo was that of a dog, a cell phone was nothing more than a dumb terminal, as shown in Figure 5 below.

However, as cell phones became more powerful, they developed the capability to use models created in the cloud. Today, cell phones, which are inherently edge devices, download a model of a dog that is then used by intelligence in the cell phone to determine that a digital image is indeed that of a dog. The benefit is that more processing occurs on the edge device. In addition, the edge device does not need to have a continuous connection to the data center in the cloud. If, for some reason, the cell phone goes into an underground tunnel with no connectivity, it can still identify a dog in a photo. (See Figure 6, below.)
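Sketched in code, the edge-side flow might look like this. The model URL, cache path, and predict() interface are all assumptions for illustration; the point is that the download happens once, and inference then runs locally.

```python
# Sketch of the edge-side flow described above (names hypothetical): the
# phone downloads the model once while it has connectivity, caches it, and
# can then classify images locally even with no network at all.
import os
import pickle
import urllib.request

MODEL_CACHE = "dog_model.pkl"                        # assumed local cache path
MODEL_URL = "https://cloud.example.com/models/dog"   # hypothetical endpoint

def download(url: str) -> bytes:
    """Fetch the serialized model from the cloud; needs connectivity."""
    return urllib.request.urlopen(url, timeout=10).read()

def load_model():
    """Prefer the cached model; fetch from the cloud only if it is missing."""
    if not os.path.exists(MODEL_CACHE):
        with open(MODEL_CACHE, "wb") as f:
            f.write(download(MODEL_URL))   # the one step that needs the network
    with open(MODEL_CACHE, "rb") as f:
        return pickle.load(f)

def classify(image) -> bool:
    """Runs entirely on the device; works in the tunnel with no signal."""
    model = load_model()                   # hits the local cache after first run
    return model.predict([image])[0] == 1  # assumes a scikit-learn-style model
```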
