

Oracle Modernization Solutions

This excerpt from Oracle Modernization Solutions by Thiru Thangarathinam is printed with permission from Packt Publishing, Copyright 2007.

Introduction to Legacy Modernization

A lot has been written on legacy modernization in the past few years. Most of the books, analyst reports, and white papers discuss, at a high level, why one should modernize, the different approaches to modernization, and its possible outcomes. Instead of going into modernization theory, we will quickly dive into the details of two well-known modernization approaches: SOA Enablement and Re-architect. There will be a specific focus on modernization to Open Systems taking advantage of the Oracle technology stack, which can provide mainframe quality of service while simultaneously delivering the agility of a modern architecture. We will uncover a specific set of tools and show the process from end to end.

We will take an agnostic perspective of hardware and operating systems as most of these have proven to be capable of handling the reliability, scalability, and performance of a mainframe system. In fact, at the time of this writing, the current records for transactions per second have been delivered with Oracle on Intel-based servers.

For most organizations, the ideal solution would be to re-architect everything, since re-architecting yields the most modernized environment: one that makes the best use of modern technology, is the most agile when it comes to change, and no longer relies on legacy skill sets.

Although such a big bang scenario is technically feasible, in reality it is difficult and risky for any organization to accomplish this in a single re-architecting step, no matter how desirable the outcome. Most organizations would view such a big bang approach as putting their entire organization at risk. As a result, they take several intermediate steps. The following chapters show several options that could be considered in order to break down the modernization problem into byte-sized chunks, all the while delivering the final goal of achieving a process-driven SOA architecture based on J2EE. Additionally, these intermediate steps of SOA enablement will yield measurable ROI and benefit.

What We Won't Cover

Before we begin our path to modernization, let's take some time to talk about the things that we will not cover in this book. The main focus of this book is the practical application of two specific techniques for modernizing a legacy application. We won't cover topics such as the modernization marketplace, methodologies, and estimation techniques.

Methodology and Estimation

Countless books have been written on application development methodology. Every system integrator, programming shop within a large company, or technology group has a general development methodology, be it waterfall, agile, or eXtreme programming. The techniques in this book can fit into any of these methodologies.

Estimation is a bit different: it varies from system to system and with the choice of modernization option. It can depend upon factors such as the target language, the tools, and the level of automation you are employing. If someone tries to sell you a solution based purely on lines of code or function point counts and complexity, you can pretty much throw that out of the window. Function point analysis is a great tool for understanding the complexity of the source code and can drive estimation, but there is certainly no general formula for how long a modernization will take, or how much it will cost. A whole other book could be written on this subject.

The Modernization Marketplace and Why Modernize

If you are reading this book, then we will assume that application modernization is a necessity for you. You are looking at "how to modernize" rather than "why modernize". Further, much market research has been done on this subject. Countless presentations, white papers, and events are actively being conducted on this subject.

The largest and best-of-breed systems integrators of the world have practices built solely around the modernization market. There are several reasons that drive a legacy modernization project: high costs, lack of agility, and an aging technology workforce are just some of them. Sometimes the motivation to modernize is driven by the business; at other times it is a pure technology play. The reasons are many, and the final decision to embark on this effort depends on each organization. Again, much material is being developed on this subject, and it is not the topic of this book.

The Oracle Modernization Alliance (OMA) is an effort by Oracle to bring together best-of-breed partners and products to enable modernization to open systems. This is truly an emerging field, both for companies considering modernization and for the companies working to provide those technologies. The OMA is a resource to help customers identify the best path to modernization. The following is a list of some key resources around modernization that you can access from Oracle. In addition, we will list some key alliances that Oracle has in the modernization space. Here, you will find abundant market research, white papers, and links to key contacts for getting engaged on a modernization initiative.

Oracle Modernization Alliance resources are as follows:

Oracle works with many global systems integrators who focus on legacy modernization. The following is a list of the current system integrators that are a part of the Oracle Modernization Alliance.

The OMA member system integrators, each of which publishes its own modernization information, are:

  • Accenture
  • Computer Sciences Corporation (CSC)
  • Datamatics Limited
  • Electronic Data Systems (EDS)
  • Hewlett-Packard (HP)
  • Hexaware Technologies
  • Oracle Financial Services Consulting
  • Perot Systems
  • Tata Consultancy Services (TCS)
  • Unisys Corporation

Deep Dive on Approaches

There are five primary options for modernization, and all are worthy of deep exploration. In the next section, we will review each of these options at a high level. However, this book is a deep technical dive into two approaches to Legacy Modernization, namely SOA enablement and re-architecture. These two options were selected for two reasons. First, between them they cover both staying on the mainframe (SOA enablement) and moving off the mainframe (re-architecture). Second, many organizations around the world are engaged on one of these two paths, or in many cases both. Although either modernization option can be chosen independently, together they provide a smooth and measured path to a modern environment without the risk of a big bang approach. We also cover a re-hosting-based approach to modernization, which minimizes the upfront risk and supports SOA enablement and selective re-architecture during or following the automated platform migration. We will cover more of this later.

Overview of the Modernization Options

There are five primary approaches to legacy modernization:

  • Re-architecting to a new environment
  • SOA integration and enablement
  • Replatforming through re-hosting and automated migration
  • Replacement with COTS solutions
  • Data Modernization

Other organizations may have different nomenclature for what they call each type of modernization, but any of these options can generally fit into one of these five categories. Each of the options can be carried out in concert with the others, or as a standalone effort. They are not mutually exclusive endeavors. Further, in a large modernization project, multiple approaches are often used for parts of the larger modernization initiative. The right mix of approaches is determined by the business needs driving the modernization, the organization's risk tolerance and time constraints, and the nature of the source environment and legacy applications. Where the applications no longer meet business needs and require significant changes, re-architecture might be the best way forward. On the other hand, for very large applications that mostly meet the business needs, SOA enablement or re-platforming might be lower-risk options.

You will notice that the first thing we talk about in this section, the Legacy Understanding phase, isn't listed as one of the modernization options. It is mentioned at this stage because it is a critical step that is done as a precursor to any option your organization chooses.

Legacy Understanding

Once we have identified our business drivers and taken the first steps in this process, we must understand what we have before we go ahead and modernize it. Legacy environments are very complex and quite often have little or no current documentation. This introduces the concept of analysis and discovery, which is valuable for any modernization technique.

Application Portfolio Analysis (APA)

In order to make use of any modernization approach, the first step an organization must take is to carry out an APA of the current applications and their environment. This process has many names: you may hear terms such as Legacy Understanding, Application Re-learn, or Portfolio Understanding. All these activities provide a clear view of the current state of the computing environment. This process equips the organization with the information that it needs to identify the best areas for modernization. For example, it can reveal process flows, data flows, how screens interact with transactions and programs, and program complexity and maintainability metrics, and can even generate pseudocode to re-document candidate business rules. Additionally, the physical repositories that are created as a result of the analysis can be used in the next stages of modernization, be it SOA enablement, re-architecture, or re-platforming. Efforts are currently underway by the Object Management Group (OMG) to create a standard method to exchange this data between applications. The following screenshot shows the Legacy Portfolio Analysis:

APA Macroanalysis

The first form of APA analysis is a very high-level, abstract view of the application environment. This level of analytics looks at the application in the context of the overall IT organization. Systems information is collected at a very high level. The key here is to understand which applications exist, how they interact, and what the identified value of the desired function is. With this type of analysis, organizations can manage overall modernization strategies and identify key applications that are good candidates for SOA integration, re-architecture, or re-platforming versus a replacement with Commercial Off-the-Shelf (COTS) applications. Data structures, program code, and technical characteristics are not analyzed here.

The following macro-level process flow diagram was automatically generated by the Relativity Technologies Modernization Workbench tool. Using this, the user can automatically get a view of the screen flows within a COBOL application. This helps identify candidate areas for modernization and areas of complexity, and supports knowledge transfer and legacy system documentation. The key thing about these types of reports is that they are dynamic and automatically generated.

The previous flow diagram illustrates some interesting points about the system that can be understood quickly by the analyst. Remember, this type of diagram is generated automatically, and can provide instant insight into the system with no prior knowledge. For example, we now have some basic information such as:

  • MENSAT1.MENMAP1 is the main driver and is most likely a menu program.
  • There are four called programs.
  • Two programs have database interfaces.

This is a simplistic view, but if you imagine hundreds of programs in such a visual perspective, you can see how we could quickly identify clusters of complexity, define potential subsystems, and do much more, all from an automated tool with visual navigation and powerful cross-referencing capabilities. This type of tool can also help to re-document existing legacy assets.
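To make the macroanalysis step less abstract, the following sketch shows, in miniature, the kind of static scan such a tool performs: extracting static CALL statements from COBOL source text to build a caller-to-callee view. This is an illustrative sketch only; the class name, regular expression, and sample program are our own inventions, not taken from any actual APA product, and real tools handle dynamic calls, copybooks, and many more statement forms.

```java
import java.util.LinkedHashSet;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Minimal sketch of the static-analysis step behind APA macroanalysis:
// scan COBOL source text for CALL statements and collect the callee names,
// the raw material for a program-level call graph.
public class CallGraphScanner {

    // Matches static calls of the form: CALL 'PROGNAME'
    private static final Pattern CALL =
        Pattern.compile("\\bCALL\\s+'([A-Z0-9-]+)'", Pattern.CASE_INSENSITIVE);

    // Returns the set of program names invoked via static CALL statements.
    public static Set<String> extractCalls(String cobolSource) {
        Set<String> callees = new LinkedHashSet<>();
        Matcher m = CALL.matcher(cobolSource);
        while (m.find()) {
            callees.add(m.group(1).toUpperCase());
        }
        return callees;
    }
}
```

Run over every program in a portfolio, the resulting caller-to-callee pairs are exactly what a visual tool renders as the flow diagram discussed above.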

APA Microanalysis

The second type of portfolio analysis is APA microanalysis. This examines applications at the program level. This level of analysis can be used to understand things like program logic or candidate business rules for enablement, or business rule transformation. This process will also reveal things such as code complexity, data exchange schemas, and specific interactions within a screen flow. These are all critical when considering SOA integration, re-architecture, or a re-platforming project.

The following are more models generated by the Relativity Technologies Modernization Workbench tool. The first shows a business rule slice taken from a COBOL program: we are able to take a low-level view of the slice and understand how the process flows. The particulars of this flow map diagram are not important; what matters is that the model can be generated automatically and is dynamic, based on the current state of the code.

The second model shows how a COBOL program interacts with a screen conversation. In this example, we are able to look at specific paragraphs within a particular program. We can identify specific CICS transactions and understand which paragraphs (or subroutines) are interacting with the database. These models can be used to further refine our drive towards a re-architected system, help us identify business rules, and help us populate a rules engine, as we will see in later chapters.

This is just another example of a COBOL program that interacts with screens (shown in gray) and paragraphs that execute CICS transactions (shown in white). With these color-coded boxes, we can quickly identify paragraphs, screens, databases, and CICS transactions.

Application Portfolio Management (APM)

APA is only a part of an IT approach known as Application Portfolio Management. While APA is critical for any modernization project, APM provides guideposts on how to combine the APA results, the business assessment of the applications' strategic value and future needs, and IT infrastructure directions to come up with a long-term application portfolio strategy and related technology targets to support it. It is often said that you cannot modernize that which you do not know. With APM, you can effectively manage change within an organization, understand the impact of change, and also manage its compliance.

APM is a constant process, be it part of a modernization project or an organization's portfolio management and change control strategy. All applications are in a constant state of change. During any modernization, things are always in a state of flux. In a modernization project, legacy code is changed, new development is done (often in parallel), and data schemas are changed. When looking into APM tool offerings, consider products that can provide facilities to capture these kinds of changes and provide an active repository, rather than a static view. Ideally, these tools must adhere to emerging technical standards, like those being pioneered by the OMG.


Re-architecting

Re-architecting is based on the concept that all legacy applications contain invaluable business logic and data relevant to the business, and that these assets should be leveraged in the new system rather than thrown out to rebuild from scratch. Since the modern IT environment elevates a lot of this logic above the code, using declarative models supported by BPM tools, ESBs, business rules engines, and data integration and access solutions, some of the original technical code can be replaced by these middleware tools to achieve greater agility. The following screenshot shows an example of a system after re-architecture.

The previous example shows what a system would look like, from a higher level, after re-architecture. We see that this isn't a simple transformation of one code base to another in a one-to-one format. It is also much more than remediation and refactoring of the legacy code into standard Java code. It is a system that fully leverages technologies suited to the required task, for example, leveraging Identity Management for security, business rules for the core business, and BPEL for process flow.

Thus, re-architecting focuses on recovering and reassembling the process relevant to business from a legacy application, while eliminating the technology-specific code. Here, we want to capture the value of the business process that is independent of the legacy code base, and move it into a different paradigm. Re-architecting is typically used to handle modernizations that involve changes in architecture, such as the introduction of object orientation and process-driven services.

The advantage that re-architecting has over greenfield development is that re-architecting recognizes that there is information in the application code and surrounding artifacts (for example, DDLs, copybooks, and user training manuals) that is useful as a source for the re-architecting process, such as application process interaction, data models, and workflow. Re-architecting will usually go outside the source code of the legacy application to incorporate concepts like workflow and new functionality that were never part of the legacy application. However, it also recognizes that the legacy application contains key business rules and processes that need to be harvested and brought forward.

Some of the important considerations for maximizing re-use by extracting business rules from legacy applications as part of a re-architecture project include:

  • Eliminate dead code and environmental specifics, and resolve mutually exclusive logic.
  • Identify key input/output data (parameters, screen input, DB and file records, and so on).
  • Keep in mind the many rules outside of the code (for example, screen flow described in a training manual).
  • Populate a data dictionary specific to the application/industry context.
  • Identify and tag rules based on transaction types and key data, policy parameters, and key results (output data).
  • Isolate rules into a tracking repository.
  • Combine automation and human review to track relationships, eliminate redundancies, classify and consolidate, and add annotations.
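The tagging and tracking-repository steps above can be sketched in a few lines of code. The following is a hypothetical miniature, not any vendor's rule-mining engine: it flags candidate business-rule statements (conditions and computations that touch a known key data item) and records them with a tag for later human review and consolidation.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Sketch of "isolate rules into a tracking repository": scan source lines,
// flag logic statements that reference key data items, and store each hit
// tagged by the data item it touches.
public class RuleRepository {

    public static class CandidateRule {
        public final int line;     // 1-based source line number
        public final String text;  // the flagged statement
        public final String tag;   // key data item that triggered the flag
        CandidateRule(int line, String text, String tag) {
            this.line = line; this.text = text; this.tag = tag;
        }
    }

    private final List<CandidateRule> rules = new ArrayList<>();

    public void scan(String[] sourceLines, Set<String> keyDataItems) {
        for (int i = 0; i < sourceLines.length; i++) {
            String line = sourceLines[i].trim().toUpperCase();
            boolean isLogic = line.startsWith("IF ")
                           || line.startsWith("EVALUATE ")
                           || line.startsWith("COMPUTE ");
            if (!isLogic) continue;
            for (String item : keyDataItems) {
                if (line.contains(item)) {
                    rules.add(new CandidateRule(i + 1, sourceLines[i].trim(), item));
                }
            }
        }
    }

    // Retrieve all candidate rules touching one key data item.
    public List<CandidateRule> byTag(String tag) {
        List<CandidateRule> out = new ArrayList<>();
        for (CandidateRule r : rules) if (r.tag.equals(tag)) out.add(r);
        return out;
    }
}
```

Real tools add much richer classification, but the essential shape (flag, tag, store, review) is the same.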

A parallel method of extracting knowledge from legacy applications uses modeling techniques, often based on UML. This method attempts to mine UML artifacts from the application code and related materials, and then create full-fledged models representing the complete application. Key considerations for mining models include:

  • Convenient code representation helps to quickly filter out technical details.
  • Allow user-selected artifacts to be quickly represented in UML entities.
  • Allow the user to add relationships and annotate the objects to assemble a more complete UML model.
  • Use external information, if possible, to refine use cases (screen flows) and activity diagrams; remember that some actors, flows, and so on may not appear in the code.
  • Export to an XML-based standard notation to facilitate refinement and forward re-engineering through UML-based tools.
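The export step in the last bullet can be pictured with a toy emitter. The element names below are illustrative only, not real XMI or any standard schema; the point is simply that mined actors and use cases become machine-readable entities a UML tool can refine.

```java
// Sketch of exporting mined artifacts (here, an actor and its screen-flow
// use cases) to a simple XML notation for downstream UML tooling.
// The <model>/<actor>/<useCase> vocabulary is invented for illustration.
public class UseCaseExporter {

    public static String toXml(String actor, String[] useCases) {
        StringBuilder sb = new StringBuilder("<model>\n");
        sb.append("  <actor name=\"").append(actor).append("\"/>\n");
        for (String uc : useCases) {
            sb.append("  <useCase name=\"").append(uc)
              .append("\" actor=\"").append(actor).append("\"/>\n");
        }
        return sb.append("</model>").toString();
    }
}
```

A real exporter would target XMI so that the result can be opened directly in UML-based modeling tools.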

Modernization with this method leverages the years of investment in the legacy code base; it is much less costly and less risky than starting a new application from ground zero. However, since it does involve change, it does have its risks. As a result, a number of other modernization options have been developed that involve less risk. The next set of modernization options provide a different set of benefits with respect to a fully re-architected SOA environment. The important thing is that these other techniques allow an organization to break the process of reaching the optimal modernization target into a series of phases that lower the overall risk of modernization for an organization.

In the following figure, we can see that re-architecture takes a monolithic legacy system and applies technology and process to deliver a highly adaptable modern architecture.

SOA Integration

SOA integration is the least invasive approach to legacy application modernization; this technique allows legacy components to be used as part of an SOA infrastructure very quickly and with little risk. Further, it is often the first step in the larger modernization process. In this method, the source code remains mostly unchanged (we will talk more about that later) and the application is wrapped using SOA components, thus creating services that can be exposed and registered to an SOA management facility on a new platform, but are implemented via the existing legacy code. The exposed services can then be re-used and combined with the results of other, more invasive modernization techniques such as re-architecting. Using SOA integration, an organization can begin to make use of SOA concepts, including the orchestration of services into business processes, while leaving the legacy application intact.

Of course, the appropriate interfaces into the legacy application must exist and the code behind these interfaces must perform useful functions in a manner that can be packaged as services. SOA readiness assessment involves analysis of service granularity, exception handling, transaction integrity and reliability requirements, considerations of response time, message sizes, and scalability, issues of end-to-end messaging security, and requirements for services orchestration and SLA management. Following an assessment, any issues discovered need to be rectified before exposing components as services, and appropriate run-time and lifecycle governance policies created and implemented.

It is important to note that there are three tiers where integration can be done: data, screen, and code. So each of the tiers, based upon the state and structure of the code, can be extended with this technique. As mentioned before, this is often the first step in modernization.

In this example, we can see that the legacy systems still stay on the legacy platform. Here, we isolate and expose this information as a business service using legacy adapters.
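The shape of such a wrapper can be sketched as a coarse-grained service facade delegating to the untouched legacy program through an adapter. Everything named below is a hypothetical stand-in: `LegacyAdapter` and its COMMAREA-style byte payload represent whatever connector (a JCA resource adapter, a Tuxedo gateway, a screen-scraping bridge) the real project would use, and `ACCTINQ` is an invented transaction name.

```java
// Sketch of code-tier SOA enablement: a coarse-grained service facade that
// hides a COMMAREA-style legacy exchange behind one business operation.
public class AccountService {

    // Hypothetical connector to the unchanged legacy program.
    public interface LegacyAdapter {
        byte[] invoke(String transactionId, byte[] commarea);
    }

    private final LegacyAdapter adapter;

    public AccountService(LegacyAdapter adapter) {
        this.adapter = adapter;
    }

    // One coarse-grained operation wraps the fixed-width request/reply
    // exchange so clients never see the legacy record layout.
    public String getAccountStatus(String accountId) {
        byte[] request = String.format("%-8s", accountId).getBytes();
        byte[] reply = adapter.invoke("ACCTINQ", request);
        return new String(reply).trim();
    }
}
```

A facade like this is what gets registered with the SOA management facility; the legacy code behind the adapter stays exactly as it was.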

The following lists important considerations in SOA integration and enablement projects.

Criteria for identifying well-defined services:

  • Represent a core enterprise function re-usable by many client applications
  • Present a coarse-grained interface
  • Single interaction vs. multi-screen flows
  • UI, business logic, and data access layers
  • Exception handling: returning results without branching to another screen
  • Discovering "services" beyond screen flows
  • Conversational vs. sync/async calls
  • COMMAREA transactions (re-factored to use a reasonable message size)
  • Security policies and their enforcement
  • RACF vs. an LDAP-based or SSO mechanism
  • End-to-end messaging security and Authentication, Authorization, and Auditing

Services integration and orchestration:

  • Wrapping and proxying via a middle-tier gateway vs. mainframe-based services
  • Who is responsible for input validation?
  • Orchestrating "composite" mainframe services
  • Supporting bidirectional integration
  • Quality of Service (QoS) requirements
  • Response time, throughput, and scalability
  • End-to-end monitoring and SLA management
  • Transaction integrity and global transaction coordination
  • End-to-end monitoring and tracing
  • Services lifecycle governance
  • Ownership of service interfaces and the change control process
  • Service discovery (repository, tools)
  • Orchestration and extension
  • BPM integration

Platform Migration

This area encompasses a few different approaches. They all share a common theme of a low-risk, predictable migration to an open systems platform, with a high level of automation to manage the process. With platform migrations, the focus is on moving from one technology base to another as fast as possible and with as little change as possible. In Chapter 10, Introduction to Re-hosting Based Modernization using Oracle Tuxedo, we will focus on moving from mainframe platforms to open systems through a combination of re-hosting applications to a compatible environment maintaining the original application language (usually COBOL), and automated migration of applications to a different language when necessary. Each uses a high level of automation and a relatively low level of human interaction as compared to other forms of modernization. The best re-platforming tools in the market are rules-based, and can also support automated changes to business logic or data access code, when required to address specific business needs, through specifically configured rule sets.

Automated Migration

Automated migration is a technique in which software tools are used to translate one language or database technology to another. It is typically used to protect the investment in business logic and data in cases where the source environment is not readily available or supportable (for example, where skills are rare) on the target platform. Such migrations are only considered automated if the scope of conversion handled by the tools is at least 80 percent. Automated migration is very fast and provides a one-to-one, functionally equivalent application. However, the quality of the target code is heavily dependent upon the source.

There are two primary factors which determine how good the target application is. The first is the source paradigm. If you are coming from a procedure-based programming model such as COBOL, then the resulting Java will not be well-structured, object-oriented code. Many vendors will claim pure OO, or 100 percent compliant Java, but in reality, programs in OO languages can still be written in a procedural fashion. When the source is a step-by-step COBOL application, that is what you will end up with after your migration to Java. This solution works quite well when the paradigm shift is not large. For example, going from PL/I to C/C++ is much more attainable with this strategy than converting COBOL to Java. This strategy is often used to migrate from 4GLs, such as Natural or CA Gen (formerly COOL:Gen), to COBOL or Java. Of the two target environments, migration to Java is more complex and typically requires additional manual re-factoring to produce proper OO POJO components or J2EE EJBs that can be easily maintained in the future.
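To see what "procedural COBOL in, procedural Java out" means in practice, consider the following sketch of what a one-to-one translation tends to produce. This is an invented illustration, not the output of any specific tool: PERFORM-style paragraphs become static methods and WORKING-STORAGE items become shared mutable fields, with no objects discovered along the way.

```java
// Illustration of a one-to-one COBOL-to-Java translation: the result
// compiles and behaves identically, but remains procedural code.
public class Acctupd {

    // WORKING-STORAGE SECTION translated to shared mutable static fields.
    static double wsBalance;
    static double wsAmount;
    static String wsStatus;

    // PROCEDURE DIVISION paragraphs translated to static void methods,
    // preserving the original PERFORM sequence.
    static void p1000MainLogic() {
        p2000ApplyDebit();
        p3000SetStatus();
    }

    static void p2000ApplyDebit() {
        wsBalance = wsBalance - wsAmount;
    }

    static void p3000SetStatus() {
        if (wsBalance < 0) {
            wsStatus = "OVERDRAWN";
        } else {
            wsStatus = "OK";
        }
    }
}
```

It is valid Java, yet there is no encapsulation and no object model: exactly the kind of result that usually needs manual re-factoring into POJOs or EJBs afterwards.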

The second factor one needs to consider is the quality of the source. Some re-factoring can be done on the source language, or the meta-language often generated in the transformation. But these usually only address things such as dead code or GOTO statements, not years of spaghetti code.

If your goal is to quickly move from one technology to another, with functional equivalence, then this is a great solution. If the goal is to make major changes to the architecture and take full advantage of the target language, then this type of method usually does not work.


Re-hosting

Re-hosting involves moving an application to another hardware platform using a compatible software stack (for example, COBOL containers and compatible OLTP functionality provided by Oracle Tuxedo) so as to leave the source application untouched. This is the most commonly used approach to migrate mainframe COBOL CICS to an open systems platform, and has been used in hundreds of projects, some as large as 12,000 MIPS.

The fundamental strength of re-hosting is that the code base does not change, and thus there are no changes to the core application. There are some adaptations involved for certain interfaces, batch jobs, and non-COBOL artifacts that are not inherently native to the target environment. These are usually handled through automated migration. The beauty of this solution is that the target environment, using an open systems platform (typically UNIX or Linux), has a significantly lower TCO than the original mainframe environment, allowing customers to save 50 to 80 percent compared to their mainframe operations. The budget savings gained from this move can fund a longer-term, yet beneficial, re-architecture effort.

Re-Hosting Based Modernization

Evolving from the core re-hosting approach and leveraging flexible, rules-driven automated conversion tools, this approach goes beyond re-hosting to a functionally equivalent application. Instead of a pure shift of COBOL code to a target system without any changes to the original code, some of the automated tooling used by Oracle's migration partners to re-host applications and data also enables automated re-engineering and SOA integration during or following migration. For example, the Metaware Refine workbench has been used to:

  • Automatically migrate COBOL CICS applications to COBOL Tuxedo applications.
  • Convert PL/I applications running under IMS TM to C/C++ applications under Tuxedo.
  • Identify and remove code duplication and dead code, re-documenting flows and dependencies derived from actual code analysis.
  • Migrate VSAM data and COBOL copybooks describing the data schema to Oracle DB DDLs and automatically change related data access code in the application.
  • Migrate DB2 to Oracle DB, making appropriate adjustments for data type differences, changing exception handling based on differences in return codes, and converting stored procedures from DB2 to Oracle.
  • Perform data cleansing, field extensions, column merges, and other data schema changes, automatically synchronized across data and data access code.
  • Migrate non-relational data to Oracle DB to provide broader access from applications on distributed systems.
  • Convert 3270/BMS interfaces to a Web UI using JSP/HTML, enabling modifications and flow optimization of the original legacy UI.
  • Adapt batch to transactional environment to shorten batch windows.
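The copybook-to-DDL bullet above can be illustrated with a deliberately tiny translator. This sketch handles only elementary `PIC X(n)` and `PIC 9(n)` clauses and maps them to Oracle `VARCHAR2` and `NUMBER` columns; real migration workbenches additionally handle OCCURS, REDEFINES, COMP-3 packed decimals, signed fields, and the synchronized rewriting of the data access code.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of the copybook-to-DDL migration step: map elementary COBOL
// PIC clauses to Oracle column types and emit a CREATE TABLE statement.
public class CopybookToDdl {

    // Matches elementary items like: 05 ACCT-ID PIC 9(8)
    private static final Pattern FIELD = Pattern.compile(
        "\\d+\\s+([A-Z0-9-]+)\\s+PIC\\s+(X|9)\\((\\d+)\\)",
        Pattern.CASE_INSENSITIVE);

    public static String toDdl(String tableName, String copybook) {
        List<String> cols = new ArrayList<>();
        Matcher m = FIELD.matcher(copybook);
        while (m.find()) {
            // COBOL hyphenated names become SQL-friendly underscored names.
            String col = m.group(1).replace('-', '_').toUpperCase();
            String type = m.group(2).equalsIgnoreCase("X")
                ? "VARCHAR2(" + m.group(3) + ")"   // alphanumeric
                : "NUMBER(" + m.group(3) + ")";     // numeric display
            cols.add("  " + col + " " + type);
        }
        return "CREATE TABLE " + tableName + " (\n"
             + String.join(",\n", cols) + "\n)";
    }
}
```

The same rule-driven mapping is what lets the tooling keep the generated schema and the rewritten data access code consistent with each other.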

APA tools for automated business rule discovery can also be used to help identify well-defined business services, and Oracle Tuxedo's SOA framework can then be used to expose these COBOL services as first-class citizens of an enterprise SOA. This approach can also be applied to PL/I applications automatically migrated to C/C++ and hosted in Tuxedo containers. The bulk of the re-hosted code remains unchanged, but certain key service elements that represent valuable re-use opportunities are exposed as Web Services or ESB business services. This approach protects the investment in the business logic of the legacy applications by enabling COBOL components to be extended to SOA using the native Web Services gateway, ESB integration, MQ integration, and so on, of Oracle Tuxedo, a modern TP/application server platform for COBOL, C, and C++.

Thus, we gain a huge advantage by having a well-structured, SOA-enabled architecture on a new platform, delivered with a high degree of automation. Using a proven application platform with built-in SOA capabilities, including native Web Services support, ESB transport, transparent J2EE integration, and integration with a meta-data repository for full services lifecycle governance, makes this a low-risk approach. It also helps to address some of the key considerations in the SOA integration lists above. With this approach, we have the ability to extend and integrate the legacy environment more easily than with a pure re-host, while benefiting from the automation that ensures high speed of delivery and low risk comparable to a black-box re-hosting.

The other aspect of this process is identifying components that will benefit from re-architecture (usually code with a low maintainability index, or code requiring significant changes to meet new business needs) and using re-architecture techniques to re-cast them as new components, such as business processes, declarative rules in a business rules engine, or re-coded J2EE components. The key is to ensure that the re-architected components remain transparently integrated with the bulk of the re-hosted code, so that the COBOL or C/C++ code outside of the selected components doesn't have to be changed. With Oracle Tuxedo, this is done via transparent bi-directional support for Web Services (using Oracle SALT) and J2EE integration (using the WebLogic-Tuxedo Connector). The key guidelines listed for business rule extraction and model mining apply to the components selected for re-architecture.

Re-hosting based modernization is sometimes referred to as Re-host++. This term highlights its roots in re-hosting applications to a compatible technology stack, together with the broad range of re-engineering, SOA integration, and re-architecting options it enables. This unique methodology is supported by a combination of an extensible COBOL, C, and C++ application platform, Oracle Tuxedo, with flexible, rules-driven automated conversion tools from Oracle's modernization partners.

Data Modernization 

Here we look at strategies for modernizing a set of data stores spread across disparate and heterogeneous sources. We often have problems with accessing and managing legacy data. There is an increasing cost to running batch jobs, which generate reports 24 to 48 hours after they are needed. Further, this legacy data often needs to be integrated with other database systems that are located on different platforms. So, from a business perspective, there is a real problem in getting actionable data in a reasonable amount of time, and at a low cost.

With Data Modernization solutions, we can look at leaving legacy data on the mainframe, pulling it out in near real time, lowering MIPS costs by processing reports outside of the batch window, and integrating it with heterogeneous data sources. This is achieved by employing several technologies in concert.
