
Rightsizing Your Risk Architecture

A comparison of risk management and architectures that satisfy the demands and requirements of enterprises today

In today's evolving corporate ecosystem, managing risk has become a core focus and function of many enterprises. Mitigating consequences such as loss of revenue, unfavorable publicity, and loss of market share, among others, is of utmost importance. Risk applications vary in a variety of ways, and the underlying technology is equally diverse. This article will compare and contrast aspects of risk management and suggest architectures (from cloud-based solutions to high-performance solutions) that satisfy the demands and requirements of enterprises today. We will start with a review of "Risk Management," followed by a look at the term "architecture." Then we will review two specific use cases to illustrate the thought process involved in mapping risk management requirements to appropriate architectures.

Risk Management Reviewed
Risk Management is the process of identifying, classifying, and choosing strategies to deal with uncertainties in relation to the business. Given the rate of change in the world today, uncertainties abound. Moreover, they are constantly changing, making risk management a real-time, continuous, and ongoing exercise. For an organization committed to achieving an integrated risk management environment, establishing a risk-aware culture is paramount. For example, meeting financial goals can be managed by identifying events that reduce revenue (negative events) as well as opportunities to increase revenue (positive events). This would be followed by prioritizing those events in terms of likelihood and impact, and then choosing appropriate strategies to deal with the risks. Strategies typically include risk avoidance (i.e., eliminating the possibility of an identified event), risk mitigation (i.e., reducing the probability of a risk event), risk deflection (e.g., buying insurance or stock options), and risk acceptance.

Likewise, compliance management (e.g., Sarbanes-Oxley, ISO 9000 - Quality and ISO 27001 - Security) can be couched in similar risk management terms. For example, in the event the organization does not perform its due diligence to comply with regulatory or industry standard guidelines, the organization may be fined for legal violations, the corporate officers may be subject to imprisonment, or customers may be lost. As you can see, every aspect of managing a business can and should be considered in terms of risk management.
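
To make the prioritization step concrete, here is a minimal Python sketch of a risk register ranked by expected exposure (likelihood times impact). The events, figures, and decision thresholds are illustrative assumptions, not prescriptions from any particular framework.

    from dataclasses import dataclass

    @dataclass
    class RiskEvent:
        name: str
        likelihood: float  # probability of occurrence over the planning horizon (0..1)
        impact: float      # estimated financial impact if the event occurs

        @property
        def exposure(self) -> float:
            # Simple expected-loss style score used to rank events.
            return self.likelihood * self.impact

    def choose_strategy(event: RiskEvent) -> str:
        # Illustrative decision rule only; real thresholds are set by the business.
        if event.exposure > 1_000_000:
            return "deflect (e.g., insure or hedge)"
        if event.exposure > 100_000:
            return "mitigate (reduce likelihood)"
        return "accept"

    events = [
        RiskEvent("Key customer defects", likelihood=0.10, impact=5_000_000),
        RiskEvent("Regulatory fine", likelihood=0.05, impact=2_000_000),
        RiskEvent("Data-center outage", likelihood=0.20, impact=400_000),
    ]

    for event in sorted(events, key=lambda e: e.exposure, reverse=True):
        print(f"{event.name}: exposure {event.exposure:,.0f} -> {choose_strategy(event)}")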

Architecture Reviewed
To understand what computer architecture is all about, let's consider building architecture. It is the job of a structural architect to define a structure for a purpose. An architect must first consider the desired use for a building as well as other limiting contextual factors such as desired construction cost, time, and quality (e.g., the building must withstand winds of 150 miles per hour). Next, using knowledge of construction materials and technologies as well as a variety of existing building architectures that can potentially be used as a template, they perform a creative process to suggest possible building designs along with their associated tradeoffs. Finally, they collaborate with the building owners to choose the design considered most appropriate given the scenario. This joint business-technical approach to systems architecture has been referred to as "total architecture," and the process has been referred to as "Total Architecture Synthesis (TAS)." In the immortal words of Paul Brown, the author of Implementing SOA: Total Architecture in Practice, "Total architecture is not a choice. It's a concession to reality. Attempts to organize business processes, people, information, and systems independently result in structures that do not fit together particularly well. The development process becomes inefficient, the focus on the ultimate business objective gets lost, and it becomes increasingly difficult to respond to changing opportunities and pressures. The only real choice you have lies in how to address the total architecture."[1]

Often, the right architecture reflects the advice of Albert Einstein: "Make everything as simple as possible, but not simpler."[2] In other words, the right architecture satisfies all of the business and contextual requirements (unless some are negotiated away) in the most efficient manner. We architects call this combination of effectiveness (it does the job) and efficiency (it uses the least resources) "elegance."

Contrast this with common expectations for risk management computer systems architecture. I am often asked to document the right risk management reference architecture. That request reflects a certain naiveté; any such one-size-fits-all reference architecture falls into Einstein's category of "too simple." In reality, there are levels of architecture and many potential reference architectures for risk management systems. The process for determining the appropriate one, as in the case of structural architecture, involves the same steps of discovery, analysis, comparison, and collaboration culminating in the "elegant" design choice.

The remainder of this article will illustrate this process using two specific risk management scenarios. The first use case will reflect a generalized compliance management (e.g., SOX, ISO 9000 and ISO 27001) scenario. The second use case will illustrate a scenario at the other end of the risk management spectrum - managing risk for an algorithmic trading portfolio. Each scenario will include standard functional requirements as well as illustrative contextual limiting factors to be considered. Then we will review some options with associated tradeoffs in order to arrive at an "elegant" design.

Generalized Compliance Management
Compliance management typically involves defining control objectives (what do we have to control to satisfy regulatory or other requirements?), controls (what organization, policies, procedures, and tasks must be enacted to facilitate compliance?), monitoring (are our controls effective and efficient?) and reporting (we need to prove compliance to external entities - auditors and regulators).
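
As a rough sketch of that chain (control objective, controls, monitoring, reporting), the Python fragment below models a single objective and produces a one-line report; the regulation reference, control names, and test results are assumptions invented for illustration.

    from dataclasses import dataclass, field

    @dataclass
    class Control:
        name: str
        procedure: str
        last_test_passed: bool = False  # populated by the monitoring step

    @dataclass
    class ControlObjective:
        regulation: str      # e.g., a SOX section or an ISO 27001 control area
        description: str
        controls: list = field(default_factory=list)

        def compliance_report(self) -> str:
            # Reporting: show auditors which controls are effective.
            failed = [c.name for c in self.controls if not c.last_test_passed]
            status = "COMPLIANT" if not failed else "GAPS: " + ", ".join(failed)
            return f"{self.regulation} - {self.description}: {status}"

    objective = ControlObjective(
        regulation="ISO 27001 access control",
        description="Restrict access to information and systems",
        controls=[
            Control("Quarterly access review", "Review all privileged accounts", True),
            Control("Leaver process", "Revoke access within 24 hours of departure", False),
        ],
    )
    print(objective.compliance_report())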

Consequently, compliance management tends to be well defined and somewhat standardized. One standard framework is that of the Committee of Sponsoring Organizations (COSO); a standard library of objectives is Control Objectives for Information and related Technology (COBIT); another reference is the Open Compliance and Ethics Group (OCEG) GRC Capability Model. Generalized compliance management processing is characterized by manual data entry, document management, workflow, and reporting. Accordingly, high-performance computing requirements are rare, and capacity requirements tend to grow slowly and predictably in a linear manner. Security requirements consist primarily of access control and auditing. For the sake of illustration, we will assume that the client has minimal internal resources to build a custom system and needs to be fully operational in a short time. Finally, the information in the system is confidential, but would not be considered secret or top secret; that is, there would be minimal impact if information somehow leaked.

Given this set of requirements and context we consider a number of architectural options.

The first option is to build a homegrown application, either via coding or using an out-of-the-box platform (e.g., SharePoint). Typically, a customer with the profile above cannot consider coding a homegrown solution due to the cost and time limitations. While an out-of-the-box platform appears to be optimal on the surface, it is not unusual to discover that many changes are required to meet specific requirements.

The second option is to consider the use of commercial-off-the-shelf (COTS) software in the customer's environment. Given that compliance management is somewhat standardized, the implementation time frame is short, and internal resources are scarce, this option is more attractive than building an internal application. The strength of most current COTS solutions is that they are typically highly configurable and extensible via adaptation. The term "adaptation" implies that:

  • Changes are accomplished via visual design or light scripting.
  • The solution remains fully supported by the vendor.
  • The solution remains upgradeable.

The associated challenge of the COTS approach is that internal technical and business staff need to understand the configuration and adaptation capabilities so that an optimal COTS design can be defined. Fortunately, an elegant method exists to achieve the required level of understanding. That method consists of a joint workshop where customer technical and business staff collaborate with a COTS subject matter expert (SME) to model and define adaptations. We refer to this as an "enablement" workshop. I have observed that customer staff can become knowledgeable and proficient in the use of a COTS solution in about two weeks. A word of warning - it is tempting to try to bypass the initial learning curve in the name of efficiency. But customers who do so are in danger of wasting many months of effort only to discover that the COTS platform provides the desired functionality out-of-the-box. For example, one customer I encountered created a custom website to allow data entry of external information into their COTS system, only to discover that the application provided import/export features out-of-the-box. In this case, they wasted $100,000 and many months of effort before they discovered the feature. In the case of COTS implementations, ignorance is not bliss.

Before we consider the third option, which uses the phrase "cloud computing," we need to define the context of this phrase. In the most general context, "cloud computing" could represent any architectural option that uses any Internet protocol. That could represent an internal cloud that uses dedicated, high-performance infrastructure. However, the phrase is typically used to indicate a solution that is at least partially outsourced. Outsourcing could involve Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), or Software-as-a-Service (SaaS). In this article we will assume that "cloud computing" indicates outsourcing to a service provider that is providing commodity infrastructure, platform, and software potentially on a virtual platform.

The third option to consider is "cloud computing" on dedicated, physical hardware. Given the customer's limited resources and the rapid time-to-implementation requirement, this option can be quite attractive. With this option, service provider security becomes an issue. Is their environment protected from outside access as well as from access by other clients? In such a case, provider certifications might be considered. For example, are they ISO 27001 certified? Likewise, one would have to consider failover and disaster recovery requirements and provider capabilities. In general, remember that the use of a service provider does not relieve you of architectural responsibility. Due diligence is required. Therefore, the "Total Architect" must formulate specific Service Level Agreements (SLAs) with the business owners and corresponding Operational Level Agreements (OLAs) with the service provider.
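
To illustrate that due diligence, here is a minimal Python sketch that checks whether assumed provider commitments (OLAs) are at least as strict as the business commitments (SLAs). The metrics and targets are made up for the example and are not drawn from any real contract.

    # Business-facing service level agreements (illustrative targets).
    slas = {
        "availability_pct": 99.5,
        "recovery_time_objective_hours": 4.0,
        "recovery_point_objective_hours": 1.0,
    }

    # Provider-facing operational level agreements (illustrative targets).
    olas = {
        "availability_pct": 99.9,
        "recovery_time_objective_hours": 2.0,
        "recovery_point_objective_hours": 0.5,
    }

    def olas_cover_slas(slas: dict, olas: dict) -> bool:
        # Each provider commitment must be at least as strict as the
        # corresponding business commitment.
        return (
            olas["availability_pct"] >= slas["availability_pct"]
            and olas["recovery_time_objective_hours"] <= slas["recovery_time_objective_hours"]
            and olas["recovery_point_objective_hours"] <= slas["recovery_point_objective_hours"]
        )

    print("OLAs cover SLAs:", olas_cover_slas(slas, olas))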

A fourth option consists of cloud computing hosted on a virtual machine. The use of virtual machines can reduce the cost of hardware and associated costs (e.g., cooling) by sharing the physical environment. The tradeoff is that the virtual layer typically adds latency, thus reducing performance. But in this given scenario, we have deemed that performance is not a critical requirement so this is also a viable option and perhaps the "elegant" solution design.

To summarize this business-driven scenario, notice that the architectural issues could be classified as business oriented or service-level oriented. Nowhere did we discuss bits, bytes, chips, networks, or other low-level technical concerns. This illustrates that architecture is a multi-layered discipline. The next scenario will demonstrate the addition of the technical architecture layer.

Algorithmic Trading Portfolio Risk Management
This scenario represents the polar opposite of our initial scenario. Specifically, the crux of high-frequency, algorithmic trading is the use of highly proprietary algorithms and high-performance computing to exploit market inefficiencies and thus achieve large volumes of small profits.

Risk management in standard trading portfolio analysis usually looks at changes in the Value at Risk (VaR) for a current portfolio. That metric considers a held trading portfolio and runs it against many models to ascertain a worst-case loss amount over a window of time at a given probability. In the case of high-frequency algorithmic trading, positions are only held for minutes to hours, so VaR is somewhat meaningless. Instead, risk management for algorithmic trading is less about the portfolio and more about the algorithm. Potential outcomes for new algorithms must be evaluated before they are enacted. Without getting too much into the specifics, which are the subject of many books, the challenge of managing risk for algorithmic trading strategies involves managing high volumes of "tick" data (aka market price changes) and exercising models (both historical and simulated) to estimate the risk and reward characteristics of the algorithm. Those models can be statistical, and such statistical modeling should consider both models that assume normality and models based on Extreme Value Theory (EVT).
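
For readers who prefer code, the following Python sketch shows a historical-simulation style VaR calculation. The P&L scenarios are randomly generated stand-ins for real model output, and the confidence level and dollar figures are illustrative assumptions.

    import numpy as np

    # Stand-in for model output: 10,000 simulated one-day P&L outcomes
    # for the portfolio (randomly generated for illustration).
    rng = np.random.default_rng(42)
    pnl_scenarios = rng.normal(loc=0.0, scale=250_000, size=10_000)

    def value_at_risk(pnl: np.ndarray, confidence: float = 0.99) -> float:
        # The (1 - confidence) quantile of P&L is the loss threshold;
        # VaR is conventionally reported as a positive number.
        return -np.quantile(pnl, 1.0 - confidence)

    print(f"1-day 99% VaR: {value_at_risk(pnl_scenarios):,.0f}")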

Consequently, the technical architecture of risk systems deals with tools and techniques to achieve low latency, data management, and parallel processing. The remainder of this article will review those topics and potential delivery architectures. In this scenario, contextual limitations include the need for extreme secrecy around trading algorithms and the intense competition to be the fastest. Since so much money is at stake, firms are willing to invest in resources to facilitate their trading platform and strategy.

While in the previous scenario we were willing to move processing into the cloud, accept reduced performance, and reduce costs using virtualization, the right architecture in this scenario seeks to bring processing units closer together, increase performance, and accept increased cost to realize gains in trading revenue while controlling risks.

A number of tools and techniques exist to achieve low latency. One technique is to co-locate an algorithmic trading firm's servers at or near exchange data centers. Using this technique, one client reduced their latency by 21 milliseconds - a huge improvement in trading scenarios where latency is currently measured in microseconds. (Most cloud applications use virtualization, so many people assume the two go together. That does not have to be the case, which is why this article refers to the cloud and virtualization separately.) Another technique to achieve low latency is to replace Ethernet and standard TCP/IP processing with more efficient communications mechanisms. For example, one might use an InfiniBand switched-fabric communications link in place of a 10 Gigabit Ethernet interconnect fabric; one estimate is that this change provides 4x lower latency. Another example replaces the traditional TCP/IP socket stack with a more efficient one that bypasses the kernel, uses Remote Direct Memory Access (RDMA), and employs zero-copy transfers; this change can reduce latency by 80-90%. A final example replaces single-core CPUs with multi-core CPUs, so that intra-CPU communications take place over silicon rather than a bus or network. Using this tool, latency for real-time events can be reduced by orders of magnitude, say from 1-4 milliseconds to 1-3 microseconds.
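
To see how such reductions might compound, the back-of-the-envelope Python sketch below applies the quoted factors to an assumed baseline latency budget. The baseline figures are illustrative guesses, not measurements from any real trading stack.

    # Assumed per-event latency contributions, in microseconds.
    baseline_us = {
        "interconnect fabric": 20.0,    # 10 GbE hop
        "socket stack": 30.0,           # traditional kernel TCP/IP path
        "event hand-off": 1000.0,       # crossing a bus or network between processors
    }

    # Reduction factors quoted in the text (treated here as rough multipliers).
    reductions = {
        "interconnect fabric": 1 / 4,   # InfiniBand: ~4x lower latency
        "socket stack": 0.15,           # kernel bypass + RDMA + zero-copy: ~80-90% less
        "event hand-off": 1 / 10,       # multi-core, on-silicon communication
    }

    before = sum(baseline_us.values())
    after = sum(baseline_us[k] * reductions[k] for k in baseline_us)
    print(f"Illustrative latency budget: {before:,.0f} us -> {after:,.1f} us")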

When it comes to data, one estimate is that a single day of tick data (individual price changes for a traded instrument) is equivalent to 30 years of daily observations, depending on how frequently an instrument is traded. Again, there are architectural options for reducing such data and replicating it to various data stores where parallelized models can process the data locally. A key tool for reducing real-time data is Complex Event Processing (CEP). A number of such tools exist; they take large volumes of streaming data and produce an aggregate value over a time window. For example, a CEP tool can take hundreds of ticks and reduce them to a single metric such as the Volume-Weighted Average Price (VWAP) - the ratio of the value traded to the total volume traded over a time horizon (e.g., one minute). By reducing hundreds of events into a single event, one gains efficiency, making a risk management model tractable.
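
As a concrete illustration of that reduction, the Python sketch below collapses a small, fabricated stream of ticks into one VWAP value per one-minute window; a production system would of course use a CEP engine rather than hand-rolled code.

    from collections import defaultdict

    # Fabricated tick stream: (timestamp in seconds, price, volume).
    ticks = [
        (0.4, 100.02, 200),
        (12.9, 100.05, 150),
        (47.1, 100.01, 300),
        (75.3, 100.10, 100),
        (98.6, 100.08, 250),
    ]

    def vwap_per_window(ticks, window_seconds=60):
        # Volume-weighted average price per time window:
        # sum(price * volume) / sum(volume).
        traded_value = defaultdict(float)
        traded_volume = defaultdict(float)
        for ts, price, volume in ticks:
            window = int(ts // window_seconds)
            traded_value[window] += price * volume
            traded_volume[window] += volume
        return {w: traded_value[w] / traded_volume[w] for w in sorted(traded_value)}

    for window, vwap in vwap_per_window(ticks).items():
        print(f"minute {window}: VWAP {vwap:.4f}")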

A number of possibilities exist to facilitate parallel processing - from multi-core CPUs to multi-processor servers to computing grids. And these options are not mutually exclusive. For example, one can compute a number of trade scenarios in parallel across a grid of multi-core CPUs. The right parallel architecture depends on a number of dimensions. For example, "Does the computational savings gained by having local data justify the cost of data replication?" Or, "Is the cost of a multi-core CPU with specialized cores warranted compared to the use of a commodity CPU?" Or even, "Is the cost of a network communication amortized over the life of a distributed computation?"
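
As a small sketch of that idea, the Python fragment below evaluates a set of made-up trade scenarios in parallel across the cores of a single machine; extending the same pattern to a grid would mean swapping the local process pool for a distributed scheduler.

    import random
    from multiprocessing import Pool

    def evaluate_scenario(seed: int) -> float:
        # Placeholder "model": simulate a P&L outcome for one scenario.
        rng = random.Random(seed)
        return sum(rng.gauss(0, 1_000) for _ in range(1_000))

    if __name__ == "__main__":
        scenarios = range(100)
        with Pool() as pool:  # one worker per core by default
            outcomes = pool.map(evaluate_scenario, scenarios)
        print(f"worst simulated outcome: {min(outcomes):,.0f}")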

As is always the case, the architect must understand the requirements and the context, and be able to elaborate the capabilities and limitations of each option to choose the "right" architecture. Clearly, in this case, the customer is highly concerned about secrecy, so the use of an external provider might be out of the question. Also, because virtualization adds one or more layers with corresponding latency, the use of a virtual environment might also be unacceptable. Finally, the wish to outperform competitors implies that any standardized COTS solution is out of the question.

In summary, this article has demonstrated that the "right" risk architecture is a function of the scenario. We have illustrated that there are various levels of architecture (business, context, delivery, technical, data, parallelism, etc.) to be considered depending on the scenario. But when a concrete scenario is presented, the "Total Architecture" approach can be applied and the "right" architecture becomes clear and decisions become obvious.

References

  1. Paul C. Brown, Implementing SOA: Total Architecture in Practice (Upper Saddle River, NJ: Addison-Wesley Professional, 2008), 5.
  2. Albert Einstein, quoted at BrainyQuote.com. Retrieved September 6, 2010.

More Stories By Frank Bucalo

Frank Bucalo is an enterprise architect with CA Technologies specializing in risk management systems. Prior to his time at CA Technologies, he was an architect and a consultant with banking and brokerage clients. He is a member of the Global Association of Risk Professionals (GARP) and an ITIL Service Manager. Mr. Bucalo represents the new breed of “Total Architect” – knowledgeable and experienced across business, technology, and technology management domains.
