WO2001026008A1 - Method for event/fault monitoring and associated estimating device - Google Patents
Method for event/fault monitoring and associated estimating device
- Publication number
- WO2001026008A1 (PCT/US2000/027629)
- Authority: WO (WIPO/PCT)
- Prior art keywords: monitoring, task, infrastructure, technology infrastructure, designing
Classifications
- G06Q10/06 (Administration; Management): Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
- G06Q10/10 (Administration; Management): Office automation; Time management
- Y02P90/80 (Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation): Management or planning
Definitions
- IT: Information Technology
- Such a framework needs to be a single framework describing an entire IT capability, whether as functions, systems or tasks.
- the IT framework should be a framework of functions, a representation of a complete checklist of all relevant activities performed in an IT enterprise.
- a single IT Framework should represent all functions operative in an IT enterprise.
- An event and fault management or monitoring function receives incidents sent from system components such as hardware, application software, system software, and communications systems. Incidents can be interpreted as either faults (failures) or events (warnings).
- An event/fault monitoring function should coordinate with other function categories to provide input and should aim to continuously improve current IT services and offerings. Such a function is also known as event and fault management.
- one embodiment of the invention is a method for providing for an event and fault monitoring function that receives, logs, classifies, analyzes and presents incidents based upon pre-established filters or thresholds.
- the method includes planning, designing, building, testing and deploying an event and fault monitoring function in an IT organization.
- the method preferably includes designing business processes, skills, and user interaction for the design phase.
- The method further includes designing an organization infrastructure and a performance enhancement infrastructure for monitoring. The method also includes designing the technology infrastructure and operations architecture for the design phase of monitoring.
- In the building phase of the method, the technology infrastructure and the operations architecture are built. Business policies, procedures, performance support, and learning products for monitoring are also built.
- In the testing phase, the technology infrastructure and the operations architecture are tested.
- In the deploying stage, the technology infrastructure for the IT organization is deployed.
- Another aspect of the present invention is a method for providing an estimate for building an event/fault monitoring function in an information technology organization.
- This aspect of the present invention allows an IT consultant to give on-site estimates to a client within minutes.
- The estimator produces a detailed breakdown of the cost and time to complete a project by displaying the costs and time corresponding to each stage of a project along with each task.
- Another aspect of the present invention is a computer system for allocating time and computing cost for building an event/fault monitoring function in an information technology organization.
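- As an illustration only, the sketch below shows how such an estimator might roll task-level hour estimates up into per-stage and project totals of time and cost. The stage names follow Figure 2, but the task hours and the hourly rate are hypothetical assumptions, not values from the estimating worksheet of Figures 17a and 17b.

```python
# Hypothetical sketch of the cost/time roll-up an estimating worksheet might perform.
# Task names, hours, and the hourly rate are illustrative assumptions, not figures
# taken from the patent.

from collections import defaultdict

HOURLY_RATE = 150.0  # assumed blended consultant rate (USD/hour)

# (stage, task, estimated hours)
TASKS = [
    ("Plan",         "2110 Refine business performance model",      40),
    ("Design",       "2410 Design processes, skills, interaction",  80),
    ("Design",       "3510 Analyze technology infrastructure",      60),
    ("Build & Test", "5510 Acquire technology infrastructure",      32),
    ("Build & Test", "5550 Build and test operations architecture", 120),
    ("Deploy",       "7170 Deploy technology infrastructure",       48),
]

def estimate(tasks, rate):
    """Return per-stage hours and cost plus project totals."""
    stages = defaultdict(lambda: {"hours": 0.0, "cost": 0.0})
    for stage, _task, hours in tasks:
        stages[stage]["hours"] += hours
        stages[stage]["cost"] += hours * rate
    total_hours = sum(s["hours"] for s in stages.values())
    total_cost = sum(s["cost"] for s in stages.values())
    return stages, total_hours, total_cost

if __name__ == "__main__":
    stages, hours, cost = estimate(TASKS, HOURLY_RATE)
    for stage, figures in stages.items():
        print(f"{stage:12s} {figures['hours']:6.0f} h  ${figures['cost']:10,.2f}")
    print(f"{'Total':12s} {hours:6.0f} h  ${cost:10,.2f}")
```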
- Figure 1a is a representation of a network/systems management function including monitoring functions.
- Figure 1b is a representation of an event/fault monitoring function including sub-elements of the function.
- Figure 2 shows a representation of a method for providing a monitoring function according to the presently preferred embodiment of the invention.
- Figure 3 shows a representation of a task for defining a business performance model for monitoring.
- Figure 4 shows a representation of a task for designing business processes, skills, and user interaction for monitoring.
- Figure 5 shows a representation of a task for designing technology infrastructure requirements for monitoring.
- Figure 6 shows a representation of a task for designing an organization infrastructure for monitoring.
- Figure 7 shows a representation of a task for designing a performance enhancement infrastructure for monitoring.
- Figure 8 shows a representation of a task for designing operations architecture for monitoring.
- Figure 9 shows a representation of a task for validating a technology infrastructure for monitoring.
- Figure 10 shows a representation of a task for acquiring a technology infrastructure for monitoring.
- Figure 11 shows a representation of a task for building and testing operations architecture for monitoring.
- Figure 12 shows a representation of a task for developing business policies, procedures, and performance support architecture for monitoring.
- Figure 13 shows a representation of a task for developing learning products for monitoring.
- Figure 14 shows a representation of a task for testing a technology infrastructure product for monitoring.
- Figure 15 shows a representation of a task for deploying a technology infrastructure for monitoring.
- Figure 16 shows a flow chart for obtaining an estimate of cost and time allocation for a project.
- Figures 17a and 17b show one embodiment of an estimating worksheet for an event/fault monitoring estimating guide.
- an information technology (“IT”) enterprise may be considered to be a business organization, charitable organization, government organization, etc., that uses an information technology system with or to support its activities.
- An IT organization is the group, associated systems and processes within the enterprise that are responsible for the management and delivery of information technology services to users in the enterprise.
- multiple functions may be organized and categorized to provide comprehensive service to the user.
- the various operations management functionalities within the IT framework include a customer service management function; a service integration function; a service delivery function; a capability development function; a change administration function; a strategy, architecture and planning function; a management and administration function; a human performance management function; and a governance and strategic relationships function.
- monitoring plays an important role.
- the present invention includes a method for providing a monitoring system or function for an information technology organization. Before describing the method for providing a monitoring function, a brief explanation is in order concerning event/fault monitoring, and its systems, functions and tasks.
- Event/fault Monitoring is a group of tasks or functions within a network or systems management function.
- Such a network/systems management function 31 is depicted in Figure 1 a, with several functions, including production scheduling 311 , output/print management 312, network/systems operations 313, operations architecture management 314, network addressing management 315, storage management 316, backup/restore management 317, archiving 318, and event/fault management 319.
- Other functions may include system performance management 3110, security management 3111 , and disaster recovery maintenance and testing 3112.
- The scope of Monitoring 319 includes four organizations: monitoring 3191, analyzing 3192, classifying, and displaying.
- Event/Fault Management: A group of functions useful in information technology may be termed Event/Fault Management. These functions receive, log, classify, analyze, and present incidents based upon pre-established filters or thresholds. Incidents are interpreted as either faults (failures) or events (warnings). Event and fault information is sent from system components such as hardware, application/system software, and communications resources. Systems, groups, or functions within event/fault management may include those for monitoring, analyzing, classifying, and displaying.
- Monitoring Requirements Management manages the requirements for new monitors and adjusts existing monitors.
- the requirements will typically identify resources and components that will be monitored and map the threshold levels into the event and fault categories.
- Incident classification determines whether an incident should be promoted to an event or a fault, or ignored. This group assigns severity levels and assesses impact. Once the data is pulled in, the incident is defined or classified, and a severity level, system impact, and notification are then determined.
- a fault is defined as a failure of a device or a critical component of that device.
- the groups correlate faults or events from multiple devices to assist in problem analysis if applicable.
- An event is defined as a tripping of a significant threshold or warning, and could be based on performance or indications of a potential failure of a device or critical component of that device.
- Part of the fault/event series of functions may be a traffic analysis group. This group identifies critical nodes that are representative of enterprise performance. These probes are then used to gather information on protocols and stations communicating on an enterprise segment. Event/fault trend reporting functions report on event/fault alerts over a time period. This function provides trending information on frequency of events/faults and potential sources of future problems and feedback into the adjustments of thresholds. Finally, under event/fault management, there is desirably a function for display management. This function maintains an effective and ergonomically correct view of the event and fault alerts presented to the operations staff.
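- For readers who prefer a concrete picture of the receive/classify behavior described above, the following sketch classifies an incoming measurement as an event, a fault, or ignorable by comparing it against pre-established warning and failure thresholds and then assigns a severity level. The metric names, threshold values, and severity labels are illustrative assumptions.

```python
# Minimal sketch of threshold-based incident classification (event vs. fault vs. ignore).
# Threshold values and metric names are assumptions for illustration only.

from dataclasses import dataclass
from enum import Enum

class Classification(Enum):
    IGNORE = "ignore"
    EVENT = "event"   # warning: a significant threshold was tripped
    FAULT = "fault"   # failure of a device or critical component

@dataclass
class Threshold:
    warning: float   # tripping this level raises an event
    failure: float   # exceeding this level is treated as a fault

# Pre-established filters/thresholds per monitored metric (illustrative values).
THRESHOLDS = {
    "cpu_utilization_pct": Threshold(warning=85.0, failure=98.0),
    "disk_free_pct":       Threshold(warning=15.0, failure=5.0),
}

SEVERITY_BY_CLASS = {Classification.FAULT: "critical", Classification.EVENT: "warning"}

def classify(metric: str, value: float) -> tuple[Classification, str | None]:
    """Classify one incident and return (classification, severity)."""
    t = THRESHOLDS.get(metric)
    if t is None:
        return Classification.IGNORE, None
    # For "free space" style metrics, lower is worse; for utilization, higher is worse.
    worse_is_lower = t.failure < t.warning
    if (value <= t.failure) if worse_is_lower else (value >= t.failure):
        cls = Classification.FAULT
    elif (value <= t.warning) if worse_is_lower else (value >= t.warning):
        cls = Classification.EVENT
    else:
        cls = Classification.IGNORE
    return cls, SEVERITY_BY_CLASS.get(cls)

if __name__ == "__main__":
    print(classify("cpu_utilization_pct", 91.0))  # (EVENT, 'warning')
    print(classify("disk_free_pct", 3.0))         # (FAULT, 'critical')
```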
- the method for providing Operations Management (“OM”) event/fault monitoring includes the tasks involved in building a particular OM function. These specific tasks are described in reference to the Operations Management Planning Chart ("OMPC") that is shown on Figure 2.
- OMPC Operations Management Planning Chart
- This chart provides a methodology for capability delivery, which includes tasks such as planning analysis, design, build & test, and deployment.
- Each OM function includes process, organization, and technology elements that are addressed throughout the description of the corresponding OM function.
- the method comprises four phases, as described below in connection with Figure 2.
- the first phase, "plan delivery" 102, or planning includes the step of defining a business performance model 2110.
- the second phase, design, 104 has a plurality of steps, including design of business processes, skills and user interactions 2410, design of organizational infrastructure 2710, design of performance enhancement infrastructure 2750, analyze technology infrastructure requirements 3510, select and design operations architecture 3550, and validate technology infrastructure 3590.
- a third phase, build and test 106 has a second plurality of steps, acquire technology infrastructure 5510, build and test operations architecture 5550, develop policies, procedures and performance support 6220, develop learning products 6260 and prepare and execute technology infrastructure product tests 5590.
- the fourth phase 108 includes the step of deploying 7170. In the following description, the details of the tasks within each step are discussed.
- Monitoring delivery and deployment focuses on improving business capability.
- One such improvement may be to upgrade the monitoring capability of an information technology system within an enterprise.
- One of the key steps in defining business and performance requirements is identifying all of the types and levels of support that end users and other stakeholders should receive from monitoring. While monitoring personnel may be responsible for performing other OM functions in the organization, this set of task packages is limited to analysis of functions which are nearly always associated with monitoring. They include monitoring, classifying, analyzing and displaying.
- Step 2110 Refine Business Performance Model
- step 2110 the business model requirements for monitoring are defined, and the scope of the delivery and deployment effort for any upgraded capability is determined.
- Figure 3 shows a representation of the tasks for carrying out these functions according to the presently preferred embodiment of the invention.
- Figure 3 is a more detailed look at the business performance model 2110, which may include the functions of confirming business architecture 2111, analyzing operating constraints 2113, analyzing current business capabilities 2115, identifying best operating practices 2117, refining business capability requirements 2118, and updating the business performance model 2119.
- Task 2111 includes assessing the current business architecture, confirming the goals and objectives, and refining the components of the business architecture. Preferably, the task includes reviewing the planning stage documentation, confirming or refining the overall monitoring architecture, and ensuring management commitment to the project. The amount of analysis performed in this task depends on the work previously performed in the planning phase of the project. Process, technology, organization, and performance issues are included in the analysis. As part of a business integration project, monitoring delivery and deployment focuses on enhancing a business capability, whereas an enterprise-wide monitoring deployment requires analysis of multiple applications rather than a single business capability. Monitoring covers the functions of event management, fault management, and system performance management. Monitoring terminology can mean different things in different organizations.
- Terminology to be defined includes, but is not limited to, organizational groups responsible for the monitoring process, and severity levels, e.g., "fatal”, “critical”, “minor” and “warning”.
- Task 2113 Analyze Operating Constraints
- Task 2113 includes identifying the operating constraints and limitations, and assessing their potential impact on the operations environment.
- the task includes assessing the organization's strategy and culture and their potential impact on the project, and assessing organization, technology, process, equipment, and facilities for constraints.
- the task includes assessing the organization's ability to adapt to changes as part of the constraints analysis. It is desirable to identify scheduled maintenance times for servers, network devices, and other infrastructure equipment.
- Analyzing the current monitoring capability 2115 is the next task in the process.
- One way to accomplish this is to document current activities and procedures to establish a performance baseline, if there is an existing system.
- An estimator may also assess strengths and weaknesses of any existing Monitoring capability in order to better plan and design for the future.
- Important considerations include understanding the Monitoring processes before looking into how they are currently measured. Another important consideration is to perform this task to the level of detail needed to understand the degree of change required to move to a new monitoring capability.
- Task 2117 Identify Monitoring Best Practices: This task includes identifying the best operating practices 2117 for the operation and identifying the Monitoring areas that could benefit from application of best practices. In one embodiment, the user will research and identify the optimum best practices to meet the environment and objectives.
- Task 2118 Refine Monitoring Requirements: The next task in planning 102 may be to refine the monitoring capability requirements 2118.
- Capability requirements define what the Monitoring infrastructure will do; capability performance requirements define how well it will operate.
- Monitoring requirements should be defined and requirements should be allocated across changes to human performance, business processes, and technology. The requirements should be defined with reference both to the performance and to monitoring interfaces with other OM components. The requirements should be developed by integrating operating constraints, current capabilities, and best practices information.
- Task 2119 Update Business Performance Model
- the last block in Figure 3 calls for updating the business performance model 2119. To accomplish this, it is necessary to understand the performance and operational objectives previously defined.
- the provider will align the metrics and target service levels with performance provisions for batch Monitoring and processing as outlined in service level agreements. Considerations may include a business performance model to define the overall design requirements for the Monitoring capability. It is advantageous to keep the metrics as simple and straightforward as possible and to consider the Monitoring infrastructure's suppliers and customers in defining the metrics.
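- To make the "simple and straightforward" metric guidance concrete, the sketch below computes one hypothetical service-level metric, the percentage of incidents acknowledged within an agreed target time; the target and the sample data are assumptions rather than values from any service level agreement.

```python
# Sketch of a simple monitoring service-level metric: percentage of incidents
# acknowledged within an agreed target time. The target and sample data are assumed.

from datetime import datetime, timedelta

ACK_TARGET = timedelta(minutes=15)  # hypothetical service-level target

incidents = [
    # (detected, acknowledged)
    (datetime(2000, 10, 6, 9, 0),  datetime(2000, 10, 6, 9, 10)),
    (datetime(2000, 10, 6, 9, 30), datetime(2000, 10, 6, 9, 50)),
    (datetime(2000, 10, 6, 10, 0), datetime(2000, 10, 6, 10, 5)),
]

within_target = sum(1 for detected, acked in incidents if acked - detected <= ACK_TARGET)
print(f"Acknowledged within target: {100.0 * within_target / len(incidents):.1f}%")
```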
- the step of designing 104 may proceed simultaneously along two or more tracks. One track focuses on the business aspects of the task, while the other focuses on technology.
- function block 2410 calls for designing business processes, skills and user interactions, while block 3510 calls for analyzing the technology and infrastructure requirements.
- step 2410 the business processes, skills, and user interaction are taken into account, as shown in Figure 4.
- the provider designs the new monitoring processes, and develops the framework and formats for monitoring.
- Figure 4 shows a representation of the tasks for carrying out these functions, according to the presently preferred embodiment of the invention.
- One task 2411 is to design workflows, or to create the workflows diagrams and define the workloads for all monitoring activities.
- Other tasks include defining the physical environment interactions 2412, identifying skills requirements for performing monitoring tasks 2413, defining application interactions, that is, the human-computer interactions necessary to fulfill key monitoring activities 2415.
- Still other tasks include identifying performance support requirements 2416, developing a capability interaction model 2417, and verifying and validating business processes, skills and user interaction 2419.
- Task 2411 Design Workflows for Processes, Activities and Tasks
- relationships are defined between core and supporting processes, activities, and tasks, and the metrics associated with the processes and activities are also defined. Considerations may include whether or not packaged software has already been selected for monitoring. If so, the business processes implied by that package or selection should be used. These should be the starting point for developing the process elements. Reporting requirements should be analyzed and documented in as much detail as possible.
- a next step is to define the physical environment interaction 2412.
- the objective of this function is to understand the implications of the monitoring processes on the physical environment; mainly this involves location, layout and equipment requirements.
- the provider will want to take into account a physical environment interaction model.
- Costing elements may include identifying the workflow/physical environment interfaces, designing the facilities, layout, and equipment required for monitoring, and identifying distributed monitoring physical requirements, if any, as well as central needs. Considerations may include the interaction model that defines the layout and co-location implications of the monitoring workflows and the physical environment.
- Monitoring processes and tools should be designed to interface with other processes, such as asset management, service control, and the like.
- the next task for a comprehensive look at the design is to identify skill requirements 2413.
- the goal is to identify the skill and behavior requirements for performing monitoring tasks.
- the deliverables from this task may include both a role interaction model and skills definition.
- a planner should identify critical tasks from the workflow designs, define the skills needed for the critical tasks and identify supporting skills needed and appropriate behavioral characteristics.
- the next task is to define application interactions 2415, or to identify the human-computer interactions necessary to fulfill key monitoring activities. This will most often involve identifying required monitoring features not supported by the monitoring software and defining the human-computer interactions needed to meet the requirements. It should be recognized that packaged software has a pre-defined application interaction. This task may only be performed for activities that are not supported by packaged software. All monitoring personnel will normally require familiarity with the tracking software in order to log incidents, track them while they are open, close them once they are complete, forward them to specialists as needed, or review and analyze incidents to identify underlying system problems.
- Task 2416 Identify Performance Support Requirements Identifying performance support requirements 2416 is the next task block for the planner.
- the planner will want to analyze the Monitoring processes and determine how to support human performance within these processes.
- the task is to analyze the critical performance factors for each Monitoring task and to select a mixture of training and support aids to maximize workforce performance in completing each task. These can include Monitoring policies and detailed procedures, on-line help screens of various kinds, checklists, etc. If the design process is a change from a present system, it is important to understand what has changed from the current processes, and use this to determine the support requirements.
- Task 2417 Develop Capability Interaction Model The next task is to develop a capability interaction model 2417.
- the provider will identify the relationships between the tasks in the workflow diagrams, the physical location, skills required, human-computer interactions and performance support needs.
- a provider will develop a capability interaction model by understanding the interactions within each process for physical environment, skills, application, and performance support, and unifying these models. The goal is an integrated interaction model that will integrate workflows, the physical environment model, role and skill definitions, the application interactions, and support requirements to develop the capability interaction models.
- the tasks should be mapped into a Swimlane diagram format to depict the interdependencies between the different elements.
- the workflow diagram may be visually divided into "swimlanes" each separated from neighboring lanes by vertical solid lines on both sides.
- Each lane represents responsibility for tasks which are part of the overall workflow, and may eventually be implemented by one or more support organizations.
- Each task is assigned to one swimlane.
- Such a model should illustrate how the process is performed, what roles fulfill the activities involved, and how the roles will be supported to maintain the monitoring capability.
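- A minimal sketch of such a model follows, assuming hypothetical lane and task names: each workflow task is assigned to exactly one swimlane (the responsible role or organization), and the model can be checked for unassigned or doubly assigned tasks.

```python
# Sketch of a swimlane-style capability interaction model. Lane and task names are
# hypothetical; the invariant illustrated is that each task belongs to exactly one lane.

WORKFLOW_TASKS = ["receive incident", "classify incident", "notify specialist", "update display"]

SWIMLANES = {
    "Monitoring operator": ["receive incident", "classify incident", "update display"],
    "Level-2 support":     ["notify specialist"],
}

def validate_swimlanes(tasks, lanes):
    """Verify every workflow task is assigned to exactly one swimlane."""
    assignment = {}
    for lane, lane_tasks in lanes.items():
        for task in lane_tasks:
            if task in assignment:
                raise ValueError(f"task {task!r} assigned to both {assignment[task]!r} and {lane!r}")
            assignment[task] = lane
    missing = [t for t in tasks if t not in assignment]
    if missing:
        raise ValueError(f"unassigned tasks: {missing}")
    return assignment

print(validate_swimlanes(WORKFLOW_TASKS, SWIMLANES))
```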
- Task 2419 Verify and Validate Business Processes, Skills and User Interaction
- the final task of step 2410 is to verify and validate business processes, skills & user interaction 2419.
- a provider will want to verify and validate that the process designs and the Capability Interaction Models meet the monitoring requirements and are internally consistent.
- the end result is a business performance model that will help the design team and guide the project manager in both the technical and business aspects of the project.
- a provider will use stakeholders in the monitoring domain and outside experts as well as the design teams to do the validation.
- the provider will then verify and validate workflow diagrams in order to confirm that each process, activity, and task and its associated workflow fit together, and that the workflows meet the business capability requirements.
- Step 2710 Design Organization Infrastructure
- the method includes defining the structures for managing human performance, and defining what is expected of people who participate in the monitoring function, the required competencies, and how performance is managed.
- Figure 6 shows a representation of the tasks for carrying out these functions, according to the presently preferred embodiment of the invention.
- Task 271 1 Design Roles, Jobs and Teams
- the task will include the design of the roles, jobs and teams. As an example, the design may wrestle with the issue of whether the monitoring function will be centralized, distributed, or decentralized. Not only will this affect the capital costs, but it may also help to determine the reporting relationships and to identify the performance measurement factors. Monitoring roles and jobs will typically be based on the breadth of functions assigned. The monitoring organization structure should be designed around all these business requirements.
- the next task 2713 may be to design a competency model.
- the designer can define the skills, knowledge, and behavior that people need to accomplish their roles in the monitoring process.
- the goal of this task is a Competency Model for Skill/Knowledge/Behavior, that is, to determine the characteristics required of the individuals/teams that will fill the roles.
- Sub-tasks or portions may include defining the individual capabilities necessary for success in these roles.
- the manager may then organize the capabilities along a proficiency scale and relate them to the jobs and teams. Attitude and personality are factors that will impact the performance of Monitoring personnel nearly as much as technical training and expertise.
- Task 2715 Design Performance Management Infrastructure These tasks define the people and teams that will perform in monitoring. The next task may be to design a performance management infrastructure 2715. The design here will define how individual performance will be measured, developed, and rewarded. There may be implications here on both the design and capital costs. The design here may also determine a performance management approach and appraisal criteria. The goal of the design effort may be to deliver a performance management infrastructure or design, and to develop standards for individuals and teams involved in the monitoring process. If management wishes also to identify a system to monitor the individuals' and teams' ability to perform up to the standards, the infrastructure to accomplish this is desirably included "in the ground floor," that is, when the system is designed and the cost is determined, rather than later.
- the next task of determining the organization mobilization approach may be necessary primarily if monitoring is a new function within an organization, or of course, if the organization itself is new.
- the function must be staffed, or put another way, the organization must determine an infrastructure mobilization approach 2717. This is not normally a factor in capital costs, since personnel tend to be ongoing expenses. However, any peculiarities or changes from a "standard" design should be considered when costing a project or establishing a budget.
- the project manager may want to consider at some point how to mobilize the resources required to staff the new Monitoring capability. In staffing, the manager should identify profiles of the ideal candidates for each position, identify the sourcing approaches and timing requirements, and determine the selection and recruiting approaches.
- Task 2719 Verify and Validate Organization Infrastructure Once designed and costed, it may be prudent to verify and validate the organizational infrastructure 2719. The goal of this task is to verify and validate that the monitoring organization meets the needs of the monitoring capability and is internally consistent. A designer will want to confirm the organization with subject matter experts. The end result is that the designer will verify that the organization structure satisfies monitoring capability requirements.
- Step 2750 Design Performance Enhancement Infrastructure: In this step, a performance enhancement infrastructure is designed.
- Figure 7 shows a representation of the tasks for carrying out these functions, according to the presently preferred embodiment of the invention. Tasks include employee assessment 2751 , any performance enhancement needs 2753, investigation into performance enhancement products 2755, and verification and validation of the performance 2759.
- Task 2751 Assess Employee Competency and Performance.
- This task is to refine the information about the current monitoring staff's competency, proficiency, and performance levels in specific areas, and assess the gaps in competencies and performance levels that drive the design of the performance enhancement infrastructure.
- the task includes assessing the competency of the current monitoring staff based on the competency model previously developed.
- This task is to assess the performance support and training requirements necessary to close the competency and performance gaps in the workforce.
- the task includes using the employee assessment to determine the type of performance enhancement required to close the gaps and reach the desired competency levels.
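- The gap assessment can be pictured with a small sketch: required proficiency levels from the competency model are compared against assessed employee proficiencies, and the shortfalls drive the design of the performance enhancement infrastructure. The competency names and the 1-5 proficiency scale are assumptions for illustration.

```python
# Sketch of a competency gap assessment. Competency names and proficiency levels
# (1-5 scale) are illustrative assumptions.

REQUIRED = {  # from the competency model (task 2713)
    "monitoring tool operation": 4,
    "incident analysis": 3,
    "escalation procedures": 3,
}

ASSESSED = {  # from the employee assessment (task 2751)
    "monitoring tool operation": 2,
    "incident analysis": 3,
    "escalation procedures": 1,
}

gaps = {skill: REQUIRED[skill] - ASSESSED.get(skill, 0)
        for skill in REQUIRED
        if REQUIRED[skill] > ASSESSED.get(skill, 0)}

# Gaps indicate where training or performance support products are needed.
print(gaps)  # {'monitoring tool operation': 2, 'escalation procedures': 2}
```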
- This task includes defining the number and structure of performance support and learning products.
- the designer determines the delivery approaches for training and performance support, designs the learning and performance support products, and defines the support systems for delivering training and performance support.
- Typical training and performance support design issues will revolve around the software tools to be used and the associated procedures for analysis, notification triggering, escalation and resolution.
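- As an illustration of the notification-triggering and escalation procedures such training would cover, the sketch below escalates an unresolved incident through successive notification tiers based on elapsed time; the tier names and wait times are assumptions, not procedures defined here.

```python
# Sketch of time-based notification escalation. Tier names and wait times are
# illustrative assumptions.

ESCALATION_TIERS = [
    (0,  "console operator"),    # notified immediately
    (15, "on-call specialist"),  # after 15 minutes unresolved
    (60, "operations manager"),  # after 60 minutes unresolved
]

def who_to_notify(minutes_open: int) -> list[str]:
    """Return everyone who should have been notified by now."""
    return [role for threshold, role in ESCALATION_TIERS if minutes_open >= threshold]

print(who_to_notify(20))  # ['console operator', 'on-call specialist']
```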
- the most economical plan for software training will normally be vendor-supplied materials and instruction.
- the scope of procedural training will be dependent on the requirements and activities set up for the monitoring function in the prior analysis and design tasks.
- This task includes developing a comprehensive approach for testing the learning products with respect to achieving each product's learning objectives.
- the task includes identifying which learning objectives to be tested, and identifying the data capture methods to be used to test those objectives.
- the next step in a design may be to define a learning test approach 2757.
- the objective is to develop a comprehensive approach for testing the learning products with respect to achieving each product's learning objectives.
- the testing process will include identification of which learning objectives will be tested and identification of the data capture methods that will be used to test those objectives.
- One approach is to concentrate on learning objectives which focus on knowledge gain and relate directly to the Monitoring Performance Model and Employee Competency Model 2713.
- performance enhancement infrastructure is validated.
- the task includes verifying the performance enhancement infrastructure and the learning test deliverables to determine how well they fit together to support the new monitoring capability.
- the method simulates the processes and activities performed by the members of the monitoring team in order to identify performance enhancement weaknesses.
- the method identifies the problems and repeats the appropriate tasks necessary to address the problems.
- stakeholders and subject matter experts are included in this process.
- the first functional block 3510 is analyzing technology infrastructure requirements, and is shown in more detail in Figure 5.
- the task here is to prepare for the selection and design of the technology infrastructure and to establish preliminary plans for technology infrastructure product testing.
- the project deliverables here will include operations architecture component requirements, a physical model of the operations architecture, and a product test approach and plan. Other functions shown in the figure include tasks of analyzing technology infrastructure requirements 3511, analyzing component requirements 3515, and planning their tests 3517.
- Task 3511 Prepare Technology Infrastructure Performance Model The first task block is to prepare a technology infrastructure performance model 3511. The goal here is simple: analyze the functional, technical, and performance requirements for the Monitoring infrastructure. In this task, the project manager or planner seeks to identify key performance parameters for Monitoring, and to establish baseline project estimates, setting measurable targets for the performance indicators. This phase of the project should also include developing functional and physical models, and a performance model as well.
- the focus here is on the technology, and the goal should be to resolve all open issues as soon as possible, whether in this step or the next (selection and design 3550). If the organization has already purchased a Monitoring package, this is a strong indicator for reuse. If the business capability requirements suggest a change to other software, a strong business case will be needed to support the recommendation.
- the next task 3513 is to analyze technology infrastructure component requirements. This portion of the project begins to get into hardware and software required, as the project manager analyzes and documents requirements for Monitoring components, and defines additional needs. Tasks to be accomplished include identifying any constraints imposed by the environment and refining functional, physical, and performance requirements developed in the models previously built. In order to insure a "fit" with other aspects of the enterprise, the manager or planner should also assess the interfaces to other system components to avoid redundancy and ensure consistency/ compatibility.
- the key component of monitoring components is the actual monitoring software itself. In cases where automated event monitoring and tracking is required, a packaged solution will most likely be used. There are many different monitoring packages available, some of which can handle cross-platform use. Depending on the scope of the monitoring requirements, one or more packaged tools may be considered.
- this task should be to assess the ability of the current monitoring infrastructure to support the new component requirements 3515.
- this task is simply a system analysis step, in which a project manager or planner will consider the components described above in 3513, and see whether they are consistent with the desired infrastructure.
- the steps should include identifying current standards for technology infrastructure, and noting current standards and any gap in the analysis or the capability. Details desired may include documenting and analyzing the current Monitoring technology environment. It is important to identify the areas where gaps exist between the current infrastructure and the new requirements.
- Managers and planners will ideally be aware of constraints and limitations, in order to avoid repeating or re-doing work, or using the wrong infrastructure or components in planning the monitoring function.
- the next step may be to plan a product test for the technology infrastructure 3517.
- the results of this task will provide the basis on which the product test will be performed as well as the environment in which it is run.
- the task includes defining the test objectives, scope, environment, test conditions, and expected results.
- Sub- tasks may include defining a product test approach, designing a product test plan, and generating a deployment plan. It is important to remember that monitoring is not an island, and that all elements of monitoring need to be implemented for this test.
- the product test is a test of the infrastructure, not just the monitoring technology components. Therefore, the organizational and process elements are within the scope of such a test.
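- One way to picture such a plan, under assumed condition names, is as a list of test conditions with expected results that spans the process and organization elements as well as the technology components:

```python
# Sketch of a product test plan structure: each entry carries a scope (technology,
# process, or organization), the condition exercised, and the expected result.
# The names and conditions are illustrative assumptions.

TEST_PLAN = [
    {"scope": "technology",   "condition": "server agent loses heartbeat",
     "expected": "fault raised within 60 s and displayed to operators"},
    {"scope": "process",      "condition": "critical fault logged outside business hours",
     "expected": "on-call specialist paged per escalation procedure"},
    {"scope": "organization", "condition": "operator acknowledges warning event",
     "expected": "event closed and recorded in trend report"},
]

for case in TEST_PLAN:
    print(f"[{case['scope']:12s}] {case['condition']} -> {case['expected']}")
```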
- Step 3550 Select and Design Operations Architecture
- the manager will select and design the components 3551 required to support a high-level Monitoring architecture, including reuse 3552, packaged 3553, and custom components 3555. After selection and design, the architecture is validated 3557. This is the module where the manager designs monitoring and formulates component and assembly test approaches and plans 3558.
- Task 3551 Identify Operations Architecture Component Options A first task is to identify operations architecture component options 3551. It is important to identify specific component options that will be needed to support the production environment. Tools used in this task may include an
- the manager will be sure to identify all risks and gaps that exist in the current Monitoring environment, select components that will support the Monitoring architecture, and consider current software resources, packaged software and custom software alternatives during the selection process. If packaged software is part of the solution, the manager should submit RFPs to vendors for software products that meet basic requirements. Some packages can usually be eliminated quickly, based on such things as lack of fit with the operating system(s), server(s), or other operations architecture components already in place.
- Task 3552 Select Reuse Operations Architecture Components A potentially useful task in costing and designing a system is whether one can select reuse operations architecture components 3552. If existing architecture components can be reused without extensive hardware, or more importantly, software changes, it may be possible to save on purchase and installation expense. This step should finalize the component selection and may be done in tandem with the package and custom tasks. The manager should evaluate component reuse options, determine gaps where (typically) software will not satisfy requirements, and select any components for reuse.
- Packaged software will be the primary alternative for monitoring component requirements.
- the software should be selected based on how well the options fit the requirements, the level of vendor support and cooperation, and cost factors.
- Organizational biases for or against particular products or vendors may be issues to be addressed.
- site visits to other organizations using the software may be valuable in verifying vendor claims of functionality. It may also be helpful to have independent opinions concerning vendor support and cooperation.
- Packaged software 3553 may well be the primary alternative for Monitoring component requirements. The manager should make his or her selection based on how well the options fit the requirements, the level of vendor support and cooperation, and cost factors. Organizational biases for or against particular products or vendors may be issues to be addressed. Site visits to other organizations using the software components are desirable to verify the vendors' claims of functionality and to obtain independent opinions about vendor support and cooperation.
- Task 3555 Design Custom Operations Architecture Components If custom-designed components 3555 are considered, then any custom components may have to be designed, rather than merely purchased. On the other hand, it may be possible to customize a reuse or packaged component.
- a manager should evaluate the time, cost, and risk associated with custom development. Areas in monitoring where custom design may be needed typically include three situations. The first is the design of custom reports. The second is the scripting or parameterization needed to install the software. The last is the design of interfaces to other components to facilitate automated transfers of data or other communications. These may include, but are not limited to, network software, asset management software, application databases, e-mail software, pagers, and the like. This portion of the task may be reiterated as necessary until the manager is satisfied with the choices made.
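- The custom interface components mentioned above (to e-mail, pagers, and other operations software) are commonly structured as small adapters behind a common dispatch interface. The sketch below illustrates that pattern; the adapter classes and their behavior are hypothetical.

```python
# Sketch of custom interface components that forward monitoring alerts to other
# systems (e-mail, pager). The adapter classes and their behavior are hypothetical.

from abc import ABC, abstractmethod

class AlertChannel(ABC):
    @abstractmethod
    def send(self, severity: str, message: str) -> None: ...

class EmailChannel(AlertChannel):
    def send(self, severity: str, message: str) -> None:
        print(f"EMAIL [{severity}] {message}")   # real code would hand off to a mail gateway

class PagerChannel(AlertChannel):
    def send(self, severity: str, message: str) -> None:
        print(f"PAGE  [{severity}] {message}")   # real code would call a paging service

def dispatch(severity: str, message: str, channels: list[AlertChannel]) -> None:
    """Forward one alert to every configured interface component."""
    for channel in channels:
        channel.send(severity, message)

dispatch("critical", "disk failure on server A", [EmailChannel(), PagerChannel()])
```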
- the next task may be to develop a high-level design for the architecture, or to design and validate an operations architecture 3557.
- This portion of the design is primarily concerned with combining the reuse, packaged, and custom components into an integrated design and ensuring that the selected architecture meets the requirements for monitoring of the enterprise.
- One portion of the task may be to define the standards and procedures for component building and testing. The manager may even consider prototyping if there are any complex interfaces to other components of the operations architecture. The end result of this task is to finish with a design for monitoring, complete with standards and procedures.
- Task 3558 Develop Operations Architecture Component and Assembly Test Approach, and Plan
- a component and assembly test approach and plan 3558 is needed.
- the outputs may include separate plans for a test approach and plan for components, assemblies, and acceptance procedures.
- The plan should define the objectives, scope, metrics, regression test approach, and risks associated with each test. Tests may include component testing for the components selected above, whether new or reused. These tests are tests of the monitoring software components only, not the process and organization elements.
- the manager validates the chosen technology infrastructure 3590, as shown in Figure 9.
- An analysis is undertaken of the monitoring design 3591 , the technology infrastructure is validated 3593, the infrastructure design is validated 3595, and the plans for deploying the system and its test approach are reviewed and revised as necessary 3597.
- the manager will verify that the Monitoring design is integrated, compatible, and consistent with the other components of the Technology Infrastructure Design, and meets the Business Performance Model and Business Capability Requirements.
- Task 3591 Review and Refine Technology Infrastructure Design
- a first sub-task may be to review and refine the technology infrastructure design 3591. This task is undertaken to ensure that the Monitoring infrastructure design is compatible with other elements of the technology infrastructure. The manager may want to ensure that the monitoring function is integrated and consistent with the other components of the technology infrastructure. It may also be prudent to develop an issue list, or "punch list", for design items that conflict with the infrastructure or items that don't meet performance goals or requirements. This "punch list" may be subsequently used to refine the Monitoring infrastructure if needed.
- the next step in the design process may be to establish a technology infrastructure validation environment 3593.
- the manager designs, builds, and implements the validation environment for the technology infrastructure, and may deliver a validation schedule.
- Specific tasks may include establishing the environment, that is, the timing, and selecting and training participants. It may be valuable in the validation task to include designers and architects of OM components that will interface with monitoring in the evaluation.
- Task 3595 Validate Technology Infrastructure Design: Having established the environment, the next task is to validate the technology infrastructure design 3595.
- the manager at this point will desirably identify gaps between the design and the technology infrastructure requirements defined earlier. Projects will proceed smoothly if the manager will record issues as they arise during this phase for corrective action. The manager should also, during this phase, identify and resolve any remaining gaps between the design and the expectation or the required service.
- Part of the process is to iterate through the validation until all critical issues have been resolved and to develop action plans for less critical issues.
- If Monitoring is being installed as part of a larger business capability, this phase may serve as a checkpoint to verify that the most current requirements from the business capability release are being considered. Monitoring may be only one component of the infrastructure being tested at this point. Monitoring will typically be deployed in a single release. A manager may want to confirm that this is still appropriate by validating the monitoring interfaces to other elements of the technology infrastructure.
- the final task sub-block in the task of validating the technology infrastructure is to analyze the impact of the system and to revise plans 3597 as necessary. Tasks to be accomplished during this phase include analyzing the extent and scope of the work required for modifications and enhancements, analyzing the impact of validation outcomes on costs and benefits, and refining the plans for deployment testing. The result of this task should be a deployment plan, a test approach, a test plan and an infrastructure design.
- the point of this task is to update the appropriate technology infrastructure delivery plans based on the outcome of the validation process. Since the point of the information technology group is to service an enterprise, monitoring itself may only be part of the validation scope. Confirm also that a single release is appropriate.
- After designing the event/fault monitoring function and obtaining authorization for build and test 112, the project may proceed along three timelines in the build and test portion 106 of Figure 2. One timeline continues in the technical vein, that is, acquiring the technology infrastructure 5510 and building and testing the selected operations architecture 5550. At the same time, other groups or personnel may develop learning products 6260 and other groups or personnel may develop policies, procedures and performance support 6220 for the new system. With these tasks completed, the project manager will proceed to prepare and execute a test of the new system, that is, a technology infrastructure product test 5590. Once that test is complete, all that remains is to deploy the new system 7170.
- Acquiring the technology infrastructure 5510, Figure 10 is the first step in build and test 106.
- Tasks forming a part of this block include planning and executing the acquisition of components 5511, determining which suppliers will supply the components and services 5513, and determining how they will be supplied.
- This task package is primarily required if new packaged software is to be procured and installed as part of the project.
- the economic impact or implications are evaluated 5515, and the organization prepares and executes acceptance tests 5517 for the new components.
- the first task may be to initiate acquisition of the technology infrastructure components, primarily packaged software 5511.
- a "normal" procurement plan will suffice, so long as it includes RFP/RFQ documentation, defined vendor selection criteria, selecting from among the offering vendors, and so on.
- the process is smoothed if component capability and performance requirements are clearly defined in the documentation provided to vendors.
- Task 5513 Select and Appoint Vendors
- the next task may include selecting and appointing vendors 5513.
- the task may include evaluation of the several product offerings, negotiating contracts, and arranging for delivery and timing of delivery. It may be desirable if software training is negotiated as part of the contractual agreement. If multiple components and multiple vendors are involved, the project manager may find it advantageous to have delivery and installation of the components occur simultaneously so that the component interfaces can be tested with vendor representatives on site.
- the next task is to determine the impact and deployment implications of the software and vendor selection 5515 on the project economics and the enterprise served.
- the manager at this point may wish to compare procurement costs with project estimates, and assess the impact on the business situation. Revisions should be made and any approvals needed should be obtained. The manager should ensure that the economics of the transaction(s) are consistent with plan documentation, or changed as appropriate.
- Task 5517 Prepare and Execute Acceptance Test of Technology Architecture Components.
- the next task is to prepare and execute an acceptance test of the new components 5517.
- steps are taken to ensure that the Monitoring packaged components meet the technology infrastructure requirements. Personnel in this step build the test scripts, the test drivers, the input data, and the output data to complete the acceptance test of the Technology Architecture components.
- a build and test stage 5550 depicted in Figure 11.
- personnel design and program the Monitoring components. This is also the time to perform component and assembly testing. Major tasks may include detailed design of the operations architecture 5551, the assembly test plan 5552, building of the system 5553, component tests 5555, and assembly and test 5557.
- Task 5551 Perform Operations Architecture Detailed Design
- Detailed design should include the preparation of program specifications for custom and customized components. This task also desirably includes a design of the packaged software configuration, and detailed design reviews. Consideration should include custom components with interfaces to other OM components and any special reporting requirements for monitoring. Event correlation is one of the more sophisticated mechanisms for event management user interaction. While sophisticated, the correlation rule base can be complicated to code and difficult to maintain. Special attention should be paid to this phase of the design.
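- Because event correlation is singled out as sophisticated but complicated to code, a minimal sketch may help: events from multiple devices that share an upstream dependency and arrive within a short window are collapsed into a single suspected fault for problem analysis. The correlation window and the dependency map are assumptions.

```python
# Minimal sketch of an event correlation rule: events from devices that share an
# upstream dependency and arrive within a short window are collapsed into one fault.
# The window length and the dependency map are illustrative assumptions.

from collections import defaultdict

CORRELATION_WINDOW = 30  # seconds; assumed value

# device -> upstream component it depends on (assumed topology)
UPSTREAM = {"server-a": "switch-1", "server-b": "switch-1", "server-c": "switch-2"}

def correlate(events):
    """Group (timestamp, device) events by upstream component within the window."""
    groups = defaultdict(list)
    for ts, device in sorted(events):
        groups[UPSTREAM.get(device, device)].append((ts, device))

    findings = []
    for upstream, hits in groups.items():
        first_ts = hits[0][0]
        burst = [d for ts, d in hits if ts - first_ts <= CORRELATION_WINDOW]
        if len(burst) > 1:
            findings.append(f"suspected root cause: {upstream} (events from {burst})")
        else:
            findings.extend(f"isolated event on {d}" for _, d in hits)
    return findings

print(correlate([(0, "server-a"), (5, "server-b"), (200, "server-c")]))
```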
- Task 5552 Revise Operations Architecture Component and Assembly, Test Approach, and Plan
- If this task shows the need for any revisions, they should be accomplished when personnel revise the operations architecture component and assembly test approach and plan 5552.
- This task includes updating the monitoring test plans to reflect the components' detailed design, and defining revised considerations or changes to the requirements.
- the task includes reviewing the test approaches and plans, and revising as needed for new or updated requirements. If other OM components interface with monitoring software, these interfaces should be tested, either in this task or in the product test task.
- the project may then proceed to building the components 5553.
- personnel will build (or program) all custom monitoring components and extensions to packaged or reuse components.
- Some packages may have unique or proprietary languages for customizing or configuring. If so, there may be a need for training.
- This task includes building all custom monitoring components and extensions to packaged or reuse components.
- the task includes building the custom components, building the customized extensions to packaged or reuse components, and configuring the packaged components.
- Task 5555 Prepare and Execute Component Test of Custom Operations Components
- the next task is to prepare and execute tests of the custom operations components 5555. This testing will ensure that each custom Monitoring component and each customized component meets its requirements.
- the manager verifies the component test model, sets up the test environment, executes the test, and makes component fixes and retests as required. Tests should confirm component performance as well as their functionality. System performance should not be compromised by the amount of customization. The tests are not limited to this stage, but may proceed in subsequent testing tasks.
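- A minimal sketch of such a component test follows; the threshold-checking component and the 1 ms per-call budget are assumptions introduced for illustration, and the test covers both functionality and performance.

```python
# Minimal component-test sketch: the component under test (check_threshold) and
# the per-call budget are assumptions, not taken from the specification.
import time
import unittest

def check_threshold(metric_value: float, threshold: float) -> bool:
    """Hypothetical custom Monitoring component: flag values over a threshold."""
    return metric_value > threshold

class ThresholdComponentTest(unittest.TestCase):
    def test_functionality(self):
        self.assertTrue(check_threshold(95.0, 90.0))
        self.assertFalse(check_threshold(50.0, 90.0))

    def test_performance_budget(self):
        # Customization should not degrade performance; the average per-call
        # time must stay under an assumed 1 ms budget.
        start = time.perf_counter()
        for _ in range(10_000):
            check_threshold(95.0, 90.0)
        self.assertLess((time.perf_counter() - start) / 10_000, 0.001)

if __name__ == "__main__":
    unittest.main()
```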
- Task 5557 Prepare and Execute Operations Assembly Test
- Following component tests, the project engineer or manager then proceeds to prepare and execute an operations assembly test 5557.
- a full test is performed of all interactions between Monitoring components.
- Personnel verify the assembly test model, set up a test environment, execute the test, and make fixes and retest as needed, again in an iterative fashion.
- Shell programs or stub programs may be needed to perform the assembly test. If shell programs are used, it is important to test not only successful completion, but to build in the error conditions which would cause abnormal endings or problems.
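- The stub approach might be sketched as follows, assuming a hypothetical ticketing interface; the stub can be told to succeed or to raise an injected error, so abnormal endings are exercised alongside the successful path.

```python
# Stub sketch for an assembly test: a hypothetical ticketing interface that can
# be told to succeed or to raise an injected error condition.
from typing import Optional

class TicketSystemStub:
    """Stands in for a fault-ticketing interface that is not yet available."""
    def __init__(self, fail_with: Optional[Exception] = None):
        self.fail_with = fail_with  # error condition to inject, if any
        self.opened = []

    def open_ticket(self, event_id: str, severity: str) -> str:
        if self.fail_with is not None:
            raise self.fail_with    # simulate a timeout, rejection, and so on
        ticket_id = f"TKT-{len(self.opened) + 1}"
        self.opened.append((ticket_id, event_id, severity))
        return ticket_id

# Successful path:
ok_stub = TicketSystemStub()
assert ok_stub.open_ticket("EV-1", "major").startswith("TKT-")

# Error condition: the Monitoring component under test should handle this path.
bad_stub = TicketSystemStub(fail_with=TimeoutError("ticket system unreachable"))
try:
    bad_stub.open_ticket("EV-2", "major")
except TimeoutError:
    pass
```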
- personnel verify that all interfaces to other components are tested and operate correctly for successful, predictable outcomes and error conditions. This completes the build and test stage.
- Step 6220 Develop Policies, Procedures and Performance Support: Having completed the technical aspects, the project manager now considers some longer-term portions of the project, the policies, procedures and performance support detailed design 6220, as shown in Figure 12, needed for ongoing operation of the service. The purpose of this task is to produce a finalized, detailed set of new Monitoring policies, procedures, and reference materials. It is also desirable to conduct a usability test and review to verify ease of use with both monitoring personnel and personnel from the supported enterprise. Upon successful completion of this task, the operating personnel will have Monitoring Policies & Procedures and may also have any performance support products that may be necessary or useful. Subtasks include performing the policies, procedures and performance support detailed design 6221, developing business policies and procedures 6223, user procedures 6225, reference materials and job aids 6227, and validating and testing 6229.
- Task 6221 Perform Policies, Procedures and Performance Support Detailed Design
- in this task, personnel perform the detailed design of the new Monitoring policies, procedures, and performance support products 6221.
- This task includes providing the product structure for all the new Monitoring policies, procedures, reference materials, and job aids. It may also be desirable to provide templates for each product, and to create prototype products with reference to the overall project.
- Task 6223 Develop Business Policies and Procedures
- It may also be necessary or desirable to develop a set of business policies and procedures 6223 for the operation. This is typically a rule set governing workflows and priorities.
- Business policies in this context describe the business rules governing workflows.
- Business procedures describe the sequential sets of tasks to follow based on the policies. Specific tasks within this task include collecting and reviewing content information, drafting policies and procedures, and planning for the production of the materials. Procedures should generally be organized into three main elements of monitoring, that is, event management, fault management, and system performance management. In developing these materials, this three-way organization is most appropriate where different people or groups will have primary responsibility for each element.
- Task 6225 Develop User Procedures
- a detailed set of monitoring user procedures are delivered.
- User procedures provide the details necessary to enable smooth execution of new tasks within a given business procedure.
- the provider collects and reviews content information, drafts the procedures, verifies consistency with business policies and procedures, and plans for the production of the materials. Outside personnel who interface with the monitoring process will generally do so on a very infrequent basis. They cannot be expected to review a procedure manual each time there is a need to interact.
- Task 6227 Develop Reference Materials and Job Aids
- Along with policies and procedures, it may be useful to develop reference materials and job aids for monitoring personnel 6227.
- the provider drafts any reference materials and job aids that make a task easier or more efficient.
- the provider should collect and review content information, draft the performance support products, verify consistency of the material with policies and procedures, and then plan for the production of materials.
- Performance support materials will be very desirable in environments where monitoring is a decentralized function performed by multiple groups across the organization. Such materials will help provide consistency in the handling of problem situations.
- Task 6229 Validate and Test Policies, Procedures and Performance Support
- the project manager may now want to test and validate 6229 them. This task will confirm that the products meet the requirements of the Monitoring capability and the needs of the personnel who will use them. It is also useful as a follow-up tool to resolve open issues.
- a desirable step may include development of learning products 6260, as shown in Figure 13.
- a first task may include defining the needs for learning products and the environment in which they are to be used 6261.
- Technical training in Monitoring software components may come from the package vendor or a third party training organization. Procedural training for an organization's procedures is often custom built or tailored for the situation.
- the next tasks are to perform a learning program detailed design 6263 and to make prototypes 6265. Using the prototypes, actual learning products may then be created and produced 6267.
- the products should be tested 6269. Testing may take place later in the cycle, as depicted in Figure 13, or earlier, using prototypes, to achieve feedback and ensure the effort is on track and useful to the students or trainees.
- Task 6261 Develop Learning Product Standards and Development Environment
- the environment for developing the monitoring learning products is established.
- the provider selects authoring and development tools, defines standards, and designs templates and procedures for product development.
- Technical training in monitoring software components may come from the package vendor or a third party training organization. Procedural training is custom built.
- Task 6263 Perform Learning Program Detailed Design
- the provider specifies how each learning product identified in the learning product design is developed.
- the task includes defining learning objectives and context, designing the learning activities, and preparing the test plan. Learning objectives and their context should be defined in preparation for designing the learning activities and preparing a test plan. It may be helpful to modularize the products by separating the monitoring activities into separate learning products.
- the monitoring software is integrated into the learning program, following the completion of software technical training.
- prototypes are completed and ease-of-use sessions on classroom-based learning components (i.e., activities, support system, instructor guide) are conducted.
- the task includes creating prototype components, and conducting and evaluating the prototype.
- Task 6267 Create Learning Products
- the learning materials proposed and prototyped during the design activities are developed.
- the provider develops activities, content, and evaluation and support materials required, develops a maintenance plan, trains instructors/facilitators, and arranges for production.
- Task 6269 Test Learning Products
- This task includes testing each product with the intended audience to ensure that the product meets the stated learning objectives, that the instructors are effective, and that the learning product meets the overall learning objectives for monitoring.
- the task includes confirming the Test Plan, executing a learning test, and reviewing and making required modifications. If the target audience is small, this test serves as the formal training session for the group. Multiple sessions may be appropriate if responsibilities are split and all personnel are not responsible for knowing all activities.
- Step 5590 Prepare and Execute Technology Infrastructure Product Test: At this point, much of the project work has been completed, and the product is ready for testing in a realistic environment 5590 to ensure it is ready for deployment. A series of tests is depicted in Figure 14. The test and its design or model are first prepared 5591, with expected results. The test is then performed 5593, by executing the tests prepared earlier. The tests should simulate actual working conditions, including any related manuals, policies and procedures produced earlier. An objective of the test should be to notice any deficiencies and make changes as required. Following these tests, a deployment test should be executed 5595, to ensure that the monitoring infrastructure can be gainfully deployed within the enterprise or organization. If this test is successful, the last stage of testing may then be executed, the technology infrastructure configuration test 5597. This test will ensure that the performance of the Technology Infrastructure, including monitoring, is consistent with the technology infrastructure performance model.
- Task 5591 Prepare Technology Infrastructure Test Model
- This task is to create the monitoring infrastructure test model.
- the provider creates the test data and expected results, and creates the testing scripts for production, deployment, and configuration tests.
- the provider also conducts the monitoring training not yet completed, and reviews and approves the test model. If a complete business capability is being deployed, this is a comprehensive test with monitoring being one piece.
- the product test should occur in a production-ready environment and should include the hardware and software to be used in production. If monitoring is being implemented independently, then all or a portion of the production environment can be used as the "test" application.
- Task 5593 Execute Technology Infrastructure Product Test
- This task is to verify that the technology infrastructure successfully supports the requirements outlined in the business capability design stage.
- the provider executes the test scripts, verifies the results, and makes changes as required. It is helpful if the actual monitoring working conditions are used or simulated, including related manuals and procedures.
- Task 5595 Execute Technology Infrastructure Deployment Test
- the provider ensures that the new monitoring infrastructure is correctly deployed within the organization.
- the provider executes the test scripts, verifies the results, and makes changes as required.
- Deployment testing and configuration testing are usually minimal for monitoring, since it is a "behind-the-scenes" application, with limited visibility to the rest of the enterprise supported by the information technology organization.
- Task 5597 Execute Technology Infrastructure Configuration Test
- This task is to ensure that the performance of the technology infrastructure, including monitoring, is consistent with the technology infrastructure performance model after the infrastructure has been deployed.
- the provider executes the test scripts, verifies the results and makes changes as required, and updates the risk assessment.
- Deployment testing and configuration testing are usually minimal for monitoring, since it is a "behind-the-scenes" application, with limited visibility to the rest of the enterprise supported by the information technology organization.
- the monitoring infrastructure may be deployed online 7170, Figure 15.
- the tasks remaining include configuring the technology infrastructure 7171 to prepare for any new business capability components.
- the technology infrastructure may then be installed 7173.
- all documentation, performance support tools and training must be completed and in place prior to the deployment.
- a final task may be to verify the technology infrastructure.
- the deployment unit's technology infrastructure is customized to prepare for the new business capability components.
- the task includes reviewing the customization requirements, performing the customization, and verifying the infrastructure configuration.
- Customizing the infrastructure is normally completed in task package 5550, Build and Test Operations Architecture. This task will generally be required if the Monitoring capability is being deployed at more than one site (i.e., individual desktops or multiple servers). In these cases, variances in the existing configurations will determine any customization required.
- the technology infrastructure for monitoring is installed.
- the task includes preparing the installation environment, installing the monitoring infrastructure, and verifying the installation.
- the documentation, performance support and training tools are completed and put in place prior to the deployment.
- the new technology infrastructure environment is verified and the issues raised as a result of the testing are addressed.
- the task includes performing the infrastructure verification, making changes as required, and notifying stakeholders.
- a follow-up audit is recommended after some period of production operations to confirm the validity and accuracy of service reports. This task should require minimal effort if monitoring is being installed independently.
- the present invention also includes a method and apparatus for providing an estimate for building a monitoring function in an information technology system.
- the method and apparatus generate a preliminary work estimate (time by task) and financial estimate (dollars by classification) based on input of a set of estimating factors that identify the scope and difficulty of key aspects to the system.
- Fig. 16 is a flow chart of one embodiment of a method for providing an estimate of the time and cost to build a monitoring function in an information technology system.
- a provider of monitoring functions such as an IT consultant, for example, Andersen Consulting, obtains estimating factors from the client 202. This is a combined effort with the provider adding expertise and knowledge to help in determining the quantity and difficulty of each factor.
- Estimating factors represent key business drivers for a given operations management (OM) function. Table 1 lists and defines the factors to be considered along with examples of a quantity and difficulty rating for each factor.
- the computer program is a spreadsheet, such as EXCEL, by Microsoft Corp. of Redmond, Washington, USA.
- the consultant and the client will continue determining the number and difficulty rating for each of the remaining estimating factors 206.
- this information is transferred to an assumption sheet 208, and the assumptions for each factor are defined.
- the assumption sheet 208 allows the consultant to enter in comments relating to each estimating factor, and to document the underlying reasoning for a specific estimating factor.
- an estimating worksheet is generated and reviewed 210 by the consultant, client, or both.
- An example of a worksheet is shown in FIGS. 17a and 17b.
- the default estimates of the time required for each task will populate the worksheet, with time estimates based on the number of factors and the difficulty rating previously assigned to the estimating factors that correspond to each task.
- the amount of time per task is based on a predetermined time per unit required for the estimating factor multiplied by a factor corresponding to the level of difficulty.
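- That rule can be sketched as: estimated days for a task = number of units for its estimating factor, times the base days per unit, times a difficulty multiplier. The multipliers and figures below are illustrative assumptions, not values from the estimator.

```python
# Sketch of the task-time rule; the multipliers and figures are illustrative.
DIFFICULTY_MULTIPLIER = {"low": 0.8, "medium": 1.0, "high": 1.3}

def task_days(units: int, base_days_per_unit: float, difficulty: str) -> float:
    """Estimated days = units for the factor x base days per unit x difficulty."""
    return units * base_days_per_unit * DIFFICULTY_MULTIPLIER[difficulty]

# e.g. 12 event sources to instrument, 0.5 base days each, rated "high":
print(task_days(12, 0.5, "high"))  # about 7.8 estimated days for that task
```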
- Each task listed on the worksheet is described above in connection with details of the method for providing the monitoring function.
- the same numbers in the description of the method above correspond to the same steps, tasks, and task packages of activities shown on the worksheet of FIGS. 17a and 17b.
- the worksheet is reviewed 210 by the provider and the client for accuracy.
- Adjustments can be made to task level estimates by either returning to the factors sheet and adjusting the units 212 or by entering an override estimate in the 'Used' column 214 on the worksheet.
- This override may be used when the estimating factor produces a task estimate that is not appropriate for the task, for example, when a task is not required on a particular project.
- In Figs. 17a and 17b, the columns that allocate each task's time across personnel classifications are designated as Partner ("Ptnr"), Manager ("Mgr"), Consultant ("Cnslt"), and Analyst ("Anlst"). These allocations are adjusted to meet project requirements and are typically based on experience with delivering various stages of a project. It should be noted that the staffing factors should add up to 1.
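- A sketch of that allocation follows; the staffing fractions are assumptions chosen only to show the mechanics, and they sum to 1 as required.

```python
# Assumed staffing fractions; as noted above, they must sum to 1.
STAFFING = {"Ptnr": 0.05, "Mgr": 0.15, "Cnslt": 0.50, "Anlst": 0.30}
assert abs(sum(STAFFING.values()) - 1.0) < 1e-9

def allocate(task_days: float) -> dict:
    """Split a task's estimated days across the personnel classifications."""
    return {role: task_days * share for role, share in STAFFING.items()}

print(allocate(7.8))  # roughly {'Ptnr': 0.39, 'Mgr': 1.17, 'Cnslt': 3.9, 'Anlst': 2.34}
```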
- the workplan contains the total time required in days per stage and per task required to complete the project. Tasks may be aggregated into a "task package" of subtasks or activities for convenience.
- a worksheet as shown in FIGS. 17a and 17b, may be used, also for convenience. This worksheet may be used to adjust tasks or times as desired, from the experience of the provider, the customer, or both.
- the total estimated payroll cost for the project will then be computed and displayed, generating final estimates.
- a determination of out-of-pocket expenses 222 may be applied to the final estimates to determine a final project cost 224.
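- The roll-up can be sketched as payroll cost (days per classification times an assumed daily rate, summed over classifications) plus out-of-pocket expenses; all rates and figures below are hypothetical.

```python
# Hypothetical daily rates per classification; real rates come from the provider.
DAILY_RATE = {"Ptnr": 3200.0, "Mgr": 2000.0, "Cnslt": 1400.0, "Anlst": 900.0}

def project_cost(days_by_role: dict, out_of_pocket: float) -> float:
    """Payroll cost (days x rate, summed over classifications) plus expenses."""
    payroll = sum(days * DAILY_RATE[role] for role, days in days_by_role.items())
    return payroll + out_of_pocket

days = {"Ptnr": 4.0, "Mgr": 18.0, "Cnslt": 60.0, "Anlst": 36.0}
print(project_cost(days, out_of_pocket=12_000.0))  # 177200.0 with these figures
```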
- the provider will then review the final estimates with an internal functional expert 226.
- project management costs for managing the provider's work are included in the estimator. These are task dependent and usually run between 10 and 15% of the tasks being managed, depending on the level of difficulty. These management allocations may appear on the worksheet and work plan.
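- As a sketch, the management allocation might be computed as an assumed percentage within the stated 10% to 15% range, scaled by difficulty, applied to the days of the tasks being managed.

```python
# Assumed uplift rates; the text gives only the 10% to 15% range.
def management_days(managed_task_days: float, difficulty: str) -> float:
    rate = {"low": 0.10, "medium": 0.125, "high": 0.15}[difficulty]
    return managed_task_days * rate

print(management_days(120.0, "high"))  # 18.0 days of project management effort
```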
- the time allocations for planning and managing a project are typically broken down for each of a plurality of task packages where the task packages are planning project execution 920, organizing project resources 940, controlling project work 960, and completing project 960, as shown in FIG. 17a.
Landscapes
- Engineering & Computer Science (AREA)
- Business, Economics & Management (AREA)
- Entrepreneurship & Innovation (AREA)
- Human Resources & Organizations (AREA)
- Strategic Management (AREA)
- Economics (AREA)
- Operations Research (AREA)
- Marketing (AREA)
- Quality & Reliability (AREA)
- Tourism & Hospitality (AREA)
- Physics & Mathematics (AREA)
- General Business, Economics & Management (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Development Economics (AREA)
- Educational Administration (AREA)
- Game Theory and Decision Science (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU78666/00A AU7866600A (en) | 1999-10-06 | 2000-10-06 | Method and estimator for event/fault monitoring |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15825999P | 1999-10-06 | 1999-10-06 | |
US60/158,259 | 1999-10-06 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2001026008A1 true WO2001026008A1 (fr) | 2001-04-12 |
Family
ID=22567316
Family Applications (12)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2000/027801 WO2001026011A1 (fr) | 1999-10-06 | 2000-10-06 | Technique et estimateur pour la planification strategique de la gestion des operations |
PCT/US2000/027803 WO2001026013A1 (fr) | 1999-10-06 | 2000-10-06 | Technique et estimateur pour gestion des niveaux de service |
PCT/US2000/027629 WO2001026008A1 (fr) | 1999-10-06 | 2000-10-06 | Procede de surveillance d'evenements/defaillances et dispositif d'estimation associe |
PCT/US2000/027857 WO2001025877A2 (fr) | 1999-10-06 | 2000-10-06 | Organisation de fonctions de technologie de l'information |
PCT/US2000/027795 WO2001025876A2 (fr) | 1999-10-06 | 2000-10-06 | Technique et estimateur pour la modelisation et la planification de la capacite |
PCT/US2000/027856 WO2001025970A1 (fr) | 1999-10-06 | 2000-10-06 | Procede et estimateur pour l'evaluation de modeles de maturite d'operations |
PCT/US2000/027518 WO2001026005A1 (fr) | 1999-10-06 | 2000-10-06 | Procede de determination du cout total de la propriete |
PCT/US2000/027592 WO2001026007A1 (fr) | 1999-10-06 | 2000-10-06 | Methode et estimateur destines a la planification antisinistre d'une affaire |
PCT/US2000/027796 WO2001026010A1 (fr) | 1999-10-06 | 2000-10-06 | Procede et estimateur pour l'ordonnancement de la production |
PCT/US2000/027804 WO2001026014A1 (fr) | 1999-10-06 | 2000-10-06 | Procede et estimateur destines a la mise en oeuvre d'une commande de services |
PCT/US2000/027593 WO2001026028A1 (fr) | 1999-10-06 | 2000-10-06 | Procede et estimateur de gestion du changement |
PCT/US2000/027802 WO2001026012A1 (fr) | 1999-10-06 | 2000-10-06 | Technique et estimateur pour la gestion des moyens de stockage |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2000/027801 WO2001026011A1 (fr) | 1999-10-06 | 2000-10-06 | Technique et estimateur pour la planification strategique de la gestion des operations |
PCT/US2000/027803 WO2001026013A1 (fr) | 1999-10-06 | 2000-10-06 | Technique et estimateur pour gestion des niveaux de service |
Family Applications After (9)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2000/027857 WO2001025877A2 (fr) | 1999-10-06 | 2000-10-06 | Organisation de fonctions de technologie de l'information |
PCT/US2000/027795 WO2001025876A2 (fr) | 1999-10-06 | 2000-10-06 | Technique et estimateur pour la modelisation et la planification de la capacite |
PCT/US2000/027856 WO2001025970A1 (fr) | 1999-10-06 | 2000-10-06 | Procede et estimateur pour l'evaluation de modeles de maturite d'operations |
PCT/US2000/027518 WO2001026005A1 (fr) | 1999-10-06 | 2000-10-06 | Procede de determination du cout total de la propriete |
PCT/US2000/027592 WO2001026007A1 (fr) | 1999-10-06 | 2000-10-06 | Methode et estimateur destines a la planification antisinistre d'une affaire |
PCT/US2000/027796 WO2001026010A1 (fr) | 1999-10-06 | 2000-10-06 | Procede et estimateur pour l'ordonnancement de la production |
PCT/US2000/027804 WO2001026014A1 (fr) | 1999-10-06 | 2000-10-06 | Procede et estimateur destines a la mise en oeuvre d'une commande de services |
PCT/US2000/027593 WO2001026028A1 (fr) | 1999-10-06 | 2000-10-06 | Procede et estimateur de gestion du changement |
PCT/US2000/027802 WO2001026012A1 (fr) | 1999-10-06 | 2000-10-06 | Technique et estimateur pour la gestion des moyens de stockage |
Country Status (4)
Country | Link |
---|---|
EP (2) | EP1226523A4 (fr) |
AU (12) | AU1193601A (fr) |
CA (1) | CA2386788A1 (fr) |
WO (12) | WO2001026011A1 (fr) |
Families Citing this family (41)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
AU2002256550A1 (en) * | 2000-12-11 | 2002-06-24 | Skill Development Associates Ltd | Integrated business management system |
US7035809B2 (en) * | 2001-12-07 | 2006-04-25 | Accenture Global Services Gmbh | Accelerated process improvement framework |
US7937281B2 (en) | 2001-12-07 | 2011-05-03 | Accenture Global Services Limited | Accelerated process improvement framework |
WO2004040409A2 (fr) | 2002-10-25 | 2004-05-13 | Science Applications International Corporation | Systeme et procede destines a determiner des capacites de niveau de performance selon des criteres de modeles predetermines |
DE10331207A1 (de) | 2003-07-10 | 2005-01-27 | Daimlerchrysler Ag | Verfahren und Vorrichtung zur Vorhersage einer Ausfall-Häufigkeit |
US8572003B2 (en) * | 2003-07-18 | 2013-10-29 | Sap Ag | Standardized computer system total cost of ownership assessments and benchmarking |
US8566147B2 (en) * | 2005-10-25 | 2013-10-22 | International Business Machines Corporation | Determining the progress of adoption and alignment of information technology capabilities and on-demand capabilities by an organization |
EP1808803A1 (fr) * | 2005-12-15 | 2007-07-18 | International Business Machines Corporation | Système et procédé de sélection automatique d'une ou plusieurs mesures pour la réalisation d'une évaluation CMMI |
US8457297B2 (en) | 2005-12-30 | 2013-06-04 | Aspect Software, Inc. | Distributing transactions among transaction processing systems |
US8355938B2 (en) | 2006-01-05 | 2013-01-15 | Wells Fargo Bank, N.A. | Capacity management index system and method |
US7523082B2 (en) * | 2006-05-08 | 2009-04-21 | Aspect Software Inc | Escalating online expert help |
WO2008105825A1 (fr) * | 2007-02-26 | 2008-09-04 | Unisys Corporation | Procédé pour des services fondés sur une technologie de diversification |
EP2210227A2 (fr) * | 2007-10-25 | 2010-07-28 | Markport Limited | Modification de l'infrastructure permettant la fourniture de services dans les réseaux de télécommunication |
US8326660B2 (en) | 2008-01-07 | 2012-12-04 | International Business Machines Corporation | Automated derivation of response time service level objectives |
US8320246B2 (en) * | 2009-02-19 | 2012-11-27 | Bridgewater Systems Corp. | Adaptive window size for network fair usage controls |
US8200188B2 (en) | 2009-02-20 | 2012-06-12 | Bridgewater Systems Corp. | System and method for adaptive fair usage controls in wireless networks |
US9203629B2 (en) | 2009-05-04 | 2015-12-01 | Bridgewater Systems Corp. | System and methods for user-centric mobile device-based data communications cost monitoring and control |
US8577329B2 (en) | 2009-05-04 | 2013-11-05 | Bridgewater Systems Corp. | System and methods for carrier-centric mobile device data communications cost monitoring and control |
US20110066476A1 (en) * | 2009-09-15 | 2011-03-17 | Joseph Fernard Lewis | Business management assessment and consulting assistance system and associated method |
US20110231229A1 (en) * | 2010-03-22 | 2011-09-22 | Computer Associates Think, Inc. | Hybrid Software Component and Service Catalog |
EP2633450A4 (fr) * | 2010-10-27 | 2017-10-11 | Hewlett-Packard Enterprise Development LP | Systèmes et procédés pour programmer des changements |
US8880960B1 (en) | 2012-05-09 | 2014-11-04 | Target Brands, Inc. | Business continuity planning tool |
WO2015126409A1 (fr) | 2014-02-21 | 2015-08-27 | Hewlett-Packard Development Company, L.P. | Migration de ressources en nuage |
US10148757B2 (en) | 2014-02-21 | 2018-12-04 | Hewlett Packard Enterprise Development Lp | Migrating cloud resources |
WO2015153988A1 (fr) * | 2014-04-03 | 2015-10-08 | Greater Brain Group, Inc. | Systèmes et procédés d'augmentation de capacité de systèmes d'entreprises ou d'autres entités par une évolution de maturité |
US9984044B2 (en) | 2014-11-16 | 2018-05-29 | International Business Machines Corporation | Predicting performance regression of a computer system with a complex queuing network model |
US10044786B2 (en) | 2014-11-16 | 2018-08-07 | International Business Machines Corporation | Predicting performance by analytically solving a queueing network model |
US10460272B2 (en) * | 2016-02-25 | 2019-10-29 | Accenture Global Solutions Limited | Client services reporting |
CN106682385B (zh) * | 2016-09-30 | 2020-02-11 | 广州英康唯尔互联网服务有限公司 | 健康信息交互系统 |
EP3782021A4 (fr) * | 2018-04-16 | 2022-01-05 | Ingram Micro, Inc. | Système et procédé d'appariement de flux de revenus dans une plateforme de courtage de services en nuage |
WO2019232434A1 (fr) * | 2018-06-01 | 2019-12-05 | Walmart Apollo, Llc | Système et procédé de modification de capacité pour de nouvelles installations |
US20190369590A1 (en) | 2018-06-01 | 2019-12-05 | Walmart Apollo, Llc | Automated slot adjustment tool |
US11483350B2 (en) | 2019-03-29 | 2022-10-25 | Amazon Technologies, Inc. | Intent-based governance service |
CN110096423A (zh) * | 2019-05-14 | 2019-08-06 | 深圳供电局有限公司 | 一种基于大数据分析的服务器存储容量分析预测方法 |
US11119877B2 (en) | 2019-09-16 | 2021-09-14 | Dell Products L.P. | Component life cycle test categorization and optimization |
MX2022005750A (es) * | 2019-11-11 | 2022-08-17 | Snapit Solutions Llc | Sistema para producir y entregar productos y servicios de tecnologia de la informacion. |
US11288150B2 (en) | 2019-11-18 | 2022-03-29 | Sungard Availability Services, Lp | Recovery maturity index (RMI)-based control of disaster recovery |
US20210160143A1 (en) | 2019-11-27 | 2021-05-27 | Vmware, Inc. | Information technology (it) toplogy solutions according to operational goals |
CN111753443B (zh) * | 2020-07-29 | 2024-10-08 | 哈尔滨工业大学 | 一种基于能力累积的武器装备联合试验设计方法 |
US11501237B2 (en) | 2020-08-04 | 2022-11-15 | International Business Machines Corporation | Optimized estimates for support characteristics for operational systems |
US11329896B1 (en) | 2021-02-11 | 2022-05-10 | Kyndryl, Inc. | Cognitive data protection and disaster recovery policy management |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5793632A (en) * | 1996-03-26 | 1998-08-11 | Lockheed Martin Corporation | Cost estimating system using parametric estimating and providing a split of labor and material costs |
Family Cites Families (34)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4827423A (en) * | 1987-01-20 | 1989-05-02 | R. J. Reynolds Tobacco Company | Computer integrated manufacturing system |
JPH03111969A (ja) * | 1989-09-27 | 1991-05-13 | Hitachi Ltd | 計画作成支援方法 |
US5233513A (en) * | 1989-12-28 | 1993-08-03 | Doyle William P | Business modeling, software engineering and prototyping method and apparatus |
WO1993012488A1 (fr) * | 1991-12-13 | 1993-06-24 | White Leonard R | Systeme et procede de logiciel pour analyse de mesures |
US5701419A (en) * | 1992-03-06 | 1997-12-23 | Bell Atlantic Network Services, Inc. | Telecommunications service creation apparatus and method |
US5586021A (en) * | 1992-03-24 | 1996-12-17 | Texas Instruments Incorporated | Method and system for production planning |
US5646049A (en) * | 1992-03-27 | 1997-07-08 | Abbott Laboratories | Scheduling operation of an automated analytical system |
US5978811A (en) * | 1992-07-29 | 1999-11-02 | Texas Instruments Incorporated | Information repository system and method for modeling data |
US5630069A (en) * | 1993-01-15 | 1997-05-13 | Action Technologies, Inc. | Method and apparatus for creating workflow maps of business processes |
US5819270A (en) * | 1993-02-25 | 1998-10-06 | Massachusetts Institute Of Technology | Computer system for displaying representations of processes |
CA2118885C (fr) * | 1993-04-29 | 2005-05-24 | Conrad K. Teran | Systeme de commande de processus |
WO1994029804A1 (fr) * | 1993-06-16 | 1994-12-22 | Electronic Data Systems Corporation | Systeme de gestion des processus |
US5485574A (en) * | 1993-11-04 | 1996-01-16 | Microsoft Corporation | Operating system based performance monitoring of programs |
US5724262A (en) * | 1994-05-31 | 1998-03-03 | Paradyne Corporation | Method for measuring the usability of a system and for task analysis and re-engineering |
US5563951A (en) * | 1994-07-25 | 1996-10-08 | Interval Research Corporation | Audio interface garment and communication system for use therewith |
US5745880A (en) * | 1994-10-03 | 1998-04-28 | The Sabre Group, Inc. | System to predict optimum computer platform |
JP3315844B2 (ja) * | 1994-12-09 | 2002-08-19 | 株式会社東芝 | スケジューリング装置及びスケジューリング方法 |
JPH08320855A (ja) * | 1995-05-24 | 1996-12-03 | Hitachi Ltd | システム導入効果評価方法およびシステム |
EP0770967A3 (fr) * | 1995-10-26 | 1998-12-30 | Koninklijke Philips Electronics N.V. | Système d'aide de décision pour la gestion d'une chaíne de l'alimentation agile |
US5875431A (en) * | 1996-03-15 | 1999-02-23 | Heckman; Frank | Legal strategic analysis planning and evaluation control system and method |
US5960417A (en) * | 1996-03-19 | 1999-09-28 | Vanguard International Semiconductor Corporation | IC manufacturing costing control system and process |
US5960200A (en) * | 1996-05-03 | 1999-09-28 | I-Cube | System to transition an enterprise to a distributed infrastructure |
US5673382A (en) * | 1996-05-30 | 1997-09-30 | International Business Machines Corporation | Automated management of off-site storage volumes for disaster recovery |
US5864483A (en) * | 1996-08-01 | 1999-01-26 | Electronic Data Systems Corporation | Monitoring of service delivery or product manufacturing |
US5974395A (en) * | 1996-08-21 | 1999-10-26 | I2 Technologies, Inc. | System and method for extended enterprise planning across a supply chain |
US5930762A (en) * | 1996-09-24 | 1999-07-27 | Rco Software Limited | Computer aided risk management in multiple-parameter physical systems |
US6044354A (en) * | 1996-12-19 | 2000-03-28 | Sprint Communications Company, L.P. | Computer-based product planning system |
US5903478A (en) * | 1997-03-10 | 1999-05-11 | Ncr Corporation | Method for displaying an IT (Information Technology) architecture visual model in a symbol-based decision rational table |
US6028602A (en) * | 1997-05-30 | 2000-02-22 | Telefonaktiebolaget Lm Ericsson | Method for managing contents of a hierarchical data model |
US6106569A (en) * | 1997-08-14 | 2000-08-22 | International Business Machines Corporation | Method of developing a software system using object oriented technology |
US6092047A (en) * | 1997-10-07 | 2000-07-18 | Benefits Technologies, Inc. | Apparatus and method of composing a plan of flexible benefits |
US6131099A (en) * | 1997-11-03 | 2000-10-10 | Moore U.S.A. Inc. | Print and mail business recovery configuration method and system |
US6119097A (en) * | 1997-11-26 | 2000-09-12 | Executing The Numbers, Inc. | System and method for quantification of human performance factors |
US6157916A (en) * | 1998-06-17 | 2000-12-05 | The Hoffman Group | Method and apparatus to control the operating speed of a papermaking facility |
-
2000
- 2000-10-06 AU AU11936/01A patent/AU1193601A/en not_active Abandoned
- 2000-10-06 WO PCT/US2000/027801 patent/WO2001026011A1/fr active Application Filing
- 2000-10-06 AU AU78666/00A patent/AU7866600A/en not_active Abandoned
- 2000-10-06 WO PCT/US2000/027803 patent/WO2001026013A1/fr active Application Filing
- 2000-10-06 AU AU11938/01A patent/AU1193801A/en not_active Abandoned
- 2000-10-06 WO PCT/US2000/027629 patent/WO2001026008A1/fr active Application Filing
- 2000-10-06 WO PCT/US2000/027857 patent/WO2001025877A2/fr not_active Application Discontinuation
- 2000-10-06 AU AU80018/00A patent/AU8001800A/en not_active Abandoned
- 2000-10-06 WO PCT/US2000/027795 patent/WO2001025876A2/fr active Application Filing
- 2000-10-06 AU AU80017/00A patent/AU8001700A/en not_active Abandoned
- 2000-10-06 WO PCT/US2000/027856 patent/WO2001025970A1/fr active Search and Examination
- 2000-10-06 AU AU14317/01A patent/AU1431701A/en not_active Abandoned
- 2000-10-06 AU AU14318/01A patent/AU1431801A/en not_active Abandoned
- 2000-10-06 EP EP00973433A patent/EP1226523A4/fr not_active Withdrawn
- 2000-10-06 WO PCT/US2000/027518 patent/WO2001026005A1/fr active Application Filing
- 2000-10-06 WO PCT/US2000/027592 patent/WO2001026007A1/fr active Application Filing
- 2000-10-06 AU AU78618/00A patent/AU7861800A/en not_active Abandoned
- 2000-10-06 AU AU77566/00A patent/AU7756600A/en not_active Abandoned
- 2000-10-06 AU AU16539/01A patent/AU1653901A/en not_active Abandoned
- 2000-10-06 AU AU79960/00A patent/AU7996000A/en not_active Abandoned
- 2000-10-06 WO PCT/US2000/027796 patent/WO2001026010A1/fr active Application Filing
- 2000-10-06 AU AU79961/00A patent/AU7996100A/en not_active Abandoned
- 2000-10-06 WO PCT/US2000/027804 patent/WO2001026014A1/fr active Application Filing
- 2000-10-06 EP EP00979124A patent/EP1222510A4/fr not_active Withdrawn
- 2000-10-06 WO PCT/US2000/027593 patent/WO2001026028A1/fr active Application Filing
- 2000-10-06 WO PCT/US2000/027802 patent/WO2001026012A1/fr active Application Filing
- 2000-10-06 CA CA002386788A patent/CA2386788A1/fr not_active Abandoned
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5793632A (en) * | 1996-03-26 | 1998-08-11 | Lockheed Martin Corporation | Cost estimating system using parametric estimating and providing a split of labor and material costs |
Non-Patent Citations (5)
Title |
---|
BRIMSON J.A.: "Activity accounting: An activity-based costing approach", 1991, JOHN WILEY & SONS, INC., XP002936962 * |
DAVIS W.S. ET AL.: "The information system consultant's handbook: Systems, analysis and design", 1 December 1998, CRC PRESS, XP002936958 * |
KERZNER H. PHD: "Project management: A systems approach to planning, scheduling and controlling", 1995, XP002936961 * |
OSTERLE H. ET AL.: "Total information system management: A European approach", 1993, JOHN WILEY & SONS, LTD., XP002936959 * |
WARD J. ET AL.: "Strategic planning for information systems", 1996, JOHN WILEY & SONS, LTD., XP002936960 * |
Also Published As
Publication number | Publication date |
---|---|
AU8001800A (en) | 2001-05-10 |
CA2386788A1 (fr) | 2001-04-12 |
AU7756600A (en) | 2001-05-10 |
AU7996100A (en) | 2001-05-10 |
WO2001025876A2 (fr) | 2001-04-12 |
AU1653901A (en) | 2001-05-10 |
AU8001700A (en) | 2001-05-10 |
WO2001025970A8 (fr) | 2001-09-27 |
WO2001026028A8 (fr) | 2001-07-26 |
WO2001026013A1 (fr) | 2001-04-12 |
WO2001026010A1 (fr) | 2001-04-12 |
WO2001026005A1 (fr) | 2001-04-12 |
WO2001025876A3 (fr) | 2001-08-30 |
WO2001026014A1 (fr) | 2001-04-12 |
EP1226523A4 (fr) | 2003-02-19 |
EP1226523A1 (fr) | 2002-07-31 |
AU1193801A (en) | 2001-05-10 |
AU7996000A (en) | 2001-05-10 |
EP1222510A2 (fr) | 2002-07-17 |
EP1222510A4 (fr) | 2007-10-31 |
AU1431801A (en) | 2001-05-10 |
AU1193601A (en) | 2001-05-10 |
AU7866600A (en) | 2001-05-10 |
WO2001025970A1 (fr) | 2001-04-12 |
WO2001026028A1 (fr) | 2001-04-12 |
AU1431701A (en) | 2001-05-10 |
WO2001026011A1 (fr) | 2001-04-12 |
WO2001025877A3 (fr) | 2001-09-07 |
WO2001026007A1 (fr) | 2001-04-12 |
AU7861800A (en) | 2001-05-10 |
WO2001025877A2 (fr) | 2001-04-12 |
WO2001026012A1 (fr) | 2001-04-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2001026008A1 (fr) | Procede de surveillance d'evenements/defaillances et dispositif d'estimation associe | |
US6738736B1 (en) | Method and estimator for providing capacacity modeling and planning | |
US7035809B2 (en) | Accelerated process improvement framework | |
US7937281B2 (en) | Accelerated process improvement framework | |
US20050114829A1 (en) | Facilitating the process of designing and developing a project | |
US20160321583A1 (en) | Change navigation toolkit | |
Pilorget | Implementing IT processes | |
CISM | Managing software deliverables: a software development management methodology | |
Nejmeh et al. | The PERFECT approach to experience-based process evolution | |
Weed-Schertzer | The Authentic Service Progression (TASP) | |
Barracliffe et al. | Systems Development Life Cycle (SDLC) Methodology | |
Pilorget et al. | IT Portfolio and Project Management | |
Solin | IT-documentation framework for an Engineering and Service Company | |
Clapp et al. | A guide to conducting independent technical assessments | |
Ma | Assessing capability maturity tools for process management improvement: A case study | |
Singh | downloaded from the King’s Research Portal at https://kclpure. kcl. ac. uk/portal | |
Engelbrecht | Successfully Implementing a Manufacturing Execution Systems (MES) Solutions | |
Macholz | XP Project Management | |
Majchrzak et al. | EFFECTIVE INTEGRATION PLANNING TO SUPPORT AGILE MANUFACTURING, REENGINEERING, AND CONCURRENT ENGINEERING | |
Enterprise et al. | Request for Proposal (RFP) Proc Main | |
Hui | A two-tier adaptive approach to securing successful ERP implementation | |
von Holten | Developing a Quality Management Framework for a Knowledge Intensive Company: Quality Management Framework to Support the Ongoing Product Development Relocations | |
Hyder et al. | The Capability Model for IT-enabled Outsourcing Service Providers | |
Aygün | Unification of it process models into a simple framework supplemented by Turkish web based application | |
Ebert et al. | Controlling for IT and Software |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) | ||
REG | Reference to national code |
Ref country code: DE Ref legal event code: 8642 |
|
122 | Ep: pct application non-entry in european phase | ||
NENP | Non-entry into the national phase |
Ref country code: JP |