DOS 515 - Week 3 Discussion
Writing Prompt
Initial Post: Networks and Application Delivery
Networks are systems of cabling and electronics that allow computer systems to communicate with each other. The most popular network architecture currently in use for organizational networks is Ethernet, which can be deployed in several different topologies but is usually set up as a star.1 TCP/IP is the most common network protocol for communications. Network architectures are designed with discrete layers (physical, data link, network, transport, session, presentation, and application) that are abstracted from each other. This means one layer does not need to know how the layer above or below does its job; only that it WILL do its job when asked. This abstraction means a TCP/IP connection can be carried by an Ethernet network, an Asynchronous Transfer Mode (ATM) network, an ADSL network, or any other type of connection. Applications that rely on TCP/IP need no knowledge of what kind of network they are running over in order to work correctly. In fact, an application may not even need to know how TCP/IP works; it may instead rely on a higher-level protocol that manages the connection to the TCP/IP layer below.
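As a minimal illustration of that abstraction, using only Python's standard socket library (with example.com standing in as a placeholder host), an application can ask the transport layer for a connection without ever referencing what lies beneath it:

```python
import socket

# The application asks the transport layer (TCP) for a reliable byte stream.
# Whether those bytes ride Ethernet, ATM, or ADSL underneath is invisible here.
with socket.create_connection(("example.com", 80)) as sock:
    sock.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    print(sock.recv(1024).decode(errors="replace"))
```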
Interactions between computers can be peer-to-peer, in which no single computer has any particular authority over another. This creates an ad-hoc setup in which data must be carefully managed and tracked. A more robust method is to designate one computer as a central computer, usually called a server, with all other computers in the group coordinating as clients of that server and allowing it to manage their data. Besides data, servers can also be responsible for coordinating, distributing, or even providing applications.
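A rough sketch of the client/server pattern, assuming nothing beyond Python's standard library: the server is the single authoritative party, and clients simply connect and make requests.

```python
import socketserver

# Minimal server: clients connect and send a request; the server
# handles the data centrally and replies.
class EchoHandler(socketserver.BaseRequestHandler):
    def handle(self):
        data = self.request.recv(1024)          # receive a client request
        self.request.sendall(b"ACK: " + data)   # server-side handling

if __name__ == "__main__":
    with socketserver.TCPServer(("0.0.0.0", 5000), EchoHandler) as server:
        server.serve_forever()
```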
Thick clients are computers that run applications locally, using the resources of the host machine. This reduces network traffic because the application is already on the machine, and the host only needs to use the network when additional data must be loaded from another computer. A big advantage of thick clients is perceived application speed: there is no lag between a calculation being performed and the user seeing its result. A big disadvantage is cost, because host machines must be powerful enough to run the applications with adequate performance. There may also be additional licensing costs to deploy software to many computers. Finally, there is overhead associated with maintaining the host machines and keeping the software up to date, since every host is independent and has to be updated separately.
Thin clients are computers that delegate application storage and processing to centralized servers. A user accessing a thin client application interacts with it in what feels like a normal fashion on their local computer, but what they are really seeing is a visual representation of an application running on a different machine. The local host only has to be powerful enough to run the thin client access program that connects to the application server. A big advantage of a thin client setup is that individual client computers do not need to be particularly powerful, since the heavy CPU work is performed on another computer. It is also possible for thin client applications to access high-speed data storage systems if the server is configured with a high-bandwidth connection to that storage device. This is particularly useful when the client machine's network connection is slow, because the only data that must travel across the network is a stream of visual updates showing the current state of the application, along with user inputs to the application. Running the application on a central server also makes maintenance easy: the application only needs to be updated in one place, and the next time a client connects, it will see the new version. While thin clients do not require high-speed network connections, they perform much better with a fast one. Low latency is more important than throughput, especially for tasks that require fine motor control, such as contouring. Applications that require rapid screen updates, such as video, need high throughput as well as low latency. Another downside of thin clients is that there is always network traffic between clients and servers, so multiple connections may cause congestion if the network is not designed to support the load.
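A back-of-envelope calculation makes the latency-versus-throughput point concrete. The numbers below are illustrative assumptions, not measurements from any real deployment:

```python
def perceived_lag_ms(rtt_ms: float, frame_kb: float, downlink_mbps: float) -> float:
    """User input travels to the server and a screen update travels back:
    one round trip plus the time to transmit the frame on the downlink."""
    transmit_ms = frame_kb * 8 / downlink_mbps  # KB -> kilobits; Mbps == kilobits/ms
    return rtt_ms + transmit_ms

# 100 KB screen update, 20 ms round-trip latency:
print(perceived_lag_ms(20, 100, 50))    # ~36 ms on a 50 Mbps link
print(perceived_lag_ms(20, 100, 500))   # ~21.6 ms at 10x the bandwidth
# Tenfold more throughput shaves only ~14 ms; cutting latency helps far more.
```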
WAN solutions are typically delivered in the form of web-based forms and interfaces. The client machine only needs to be able to run a web browser; all data storage and processing happens on the server. This is an extremely flexible model because there are a variety of ways to make web access secure even on public networks, which means users may be able to access these applications without special accommodations like a VPN. Web-based delivery also means just about any client device can be used to connect. Web-based forms are growing in complexity as web technologies like HTML5 and JavaScript blur the line between server-based and client-based processing. The servers for web-delivered applications are typically set up and maintained by the in-house IT organization, so processing and storage loads can be arranged in whatever way fits the organization's needs. A web-based application consumes no bandwidth unless the user is actively submitting or requesting data, or unless the client or server determines that a screen update is needed. Because of this, network traffic comes in bursts, and payloads tend to be small because only the application state needs to be transmitted. Data is kept and processed on the server.
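That bursty, small-payload traffic pattern looks something like the sketch below; the endpoint URL and form fields are hypothetical, invented purely for illustration:

```python
import urllib.parse
import urllib.request

# One short burst of traffic: a small form payload goes out, a small
# status response comes back, and all storage/processing stays server-side.
payload = urllib.parse.urlencode({"plan_id": "TEST01", "status": "approved"}).encode()
req = urllib.request.Request("https://tracker.example.org/api/update", data=payload)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())
```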
However an application is delivered, it will likely need to interact with another application at some point. This usually involves one application sending data to another in a structured fashion that the receiving application can accept. In radiation oncology, treatment-related data such as imaging and plan files frequently need to be sent from one system to another to take advantage of each system's special capabilities. The DICOM (Digital Imaging and Communications in Medicine) standard defines both a file storage scheme and a network transfer protocol. These two roles combine in Service-Object Pair (SOP) classes, each of which pairs a service (a function) with the type of data that function can be applied to. The original DICOM standard has been extended to encompass the unique needs of RT, such as the ability to store and forward structure sets, doses, plans, and other treatment data. When one DICOM system wants to interact with another, the initiating party, known as a Service Class User (SCU), will try to negotiate a connection with a Service Class Provider (SCP) by asking whether it supports a particular SOP class. If the two systems are able to negotiate a connection, the appropriate data can then be transmitted.
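As a sketch of that negotiation, here is roughly what an SCU looks like using the open-source pynetdicom package; the host, port, and AE titles are placeholders, and a real deployment would use its own configured values:

```python
from pynetdicom import AE

# The SCU proposes a presentation context for the Verification SOP class
# (UID 1.2.840.10008.1.1, the DICOM "C-ECHO" ping) and asks the SCP to accept it.
ae = AE(ae_title="MY_SCU")
ae.add_requested_context("1.2.840.10008.1.1")

assoc = ae.associate("192.168.0.10", 104, ae_title="REMOTE_SCP")
if assoc.is_established:
    status = assoc.send_c_echo()   # data (here, a ping) flows only after negotiation
    print("C-ECHO status:", status)
    assoc.release()
else:
    print("SCP rejected the association (SOP class not supported?)")
```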
As mentioned, DICOM can include more than just imaging data. Treatment planning systems can generate plan data, which can then be sent to a Record & Verify (R&V) system, a type of electronic treatment record. Some commercial R&V systems are steadily expanding their scope to include tasks such as scheduling, billing, workflow tracking, and management reporting. This moves them toward becoming a more encompassing system called a Treatment Management Information System (TMIS).
At SCCA Proton Therapy, we use a variety of applications delivered through every one of the interfaces described here. Some individual applications are delivered in multiple ways, depending on the needs of a user at any given moment. For instance, our fusion and contouring application MIM is deployed as a thick client, as a pseudo-thin client through computer virtualization, and as a thin client through Citrix. People who need real-time access to the application for tasks like contouring access it via the thick client so that they are not performance-bound by network latency. People who create reports or load data do not need fast access, so they use virtual computer terminals, which run an entire operating system on a remote server. As far as the virtual operating system knows, the app is local, but all of the processing happens on a server rather than the local computer. People who are offsite, working from home or from an affiliated partner institution, can log in to MIM using a Citrix client over the public internet. The Citrix client maintains the security of the connection, and no data ever leaves our facility, because users only see screen updates on their local machines rather than receiving the data itself. This is the slowest method, but it is also the most flexible. MIM communicates with our treatment planning systems XiO (thin client) and RayStation (thin client), which then communicate with MOSAIQ (thin client). Since MOSAIQ is not a fully featured TMIS, we supplement its capabilities with IMS (a WAN solution) to track the planning process.
Within our local network, we have a switched star-topology Ethernet network, but our edge routers also have connections to several partner institutions through dedicated fiber lines or VPN tunnels. Several individual servers also have point-to-point VPNs that let them talk to counterparts at other institutions. All of this complexity requires three full-time IT administrators.
1. Dahl R, Herman M. Informatics Systems Overview [PDF slideshow]. Rochester, MN: Mayo Foundation.
Written October 29, 2014
First Semester, Pre-Internship