US9883035B1 - Methods and systems for automatically recognizing actions in a call center environment using screen capture technology - Google Patents
- Publication number
- US9883035B1 (application US15/479,669)
- Authority
- US
- United States
- Prior art keywords
- event
- electronic device
- mouse
- image
- input data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M3/00—Automatic or semi-automatic exchanges
- H04M3/42—Systems providing special services or facilities to subscribers
- H04M3/50—Centralised arrangements for answering calls; Centralised arrangements for recording messages for absent or busy subscribers ; Centralised arrangements for recording messages
- H04M3/51—Centralised call answering arrangements requiring operator intervention, e.g. call or contact centers for telemarketing
- H04M3/5175—Call or contact centers supervision arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/30—Monitoring
- G06F11/3003—Monitoring arrangements specially adapted to the computing system or computing system component being monitored
- G06F11/3041—Monitoring arrangements specially adapted to the computing system or computing system component being monitored where the computing system component is an input/output interface
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/30—Monitoring
- G06F11/34—Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment
- G06F11/3438—Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment monitoring of user actions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
- G06K9/00355
- G06K9/00771
- G06K9/3233
- G06K9/6215
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/28—Recognition of hand or arm movements, e.g. recognition of deaf sign language
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M3/00—Automatic or semi-automatic exchanges
- H04M3/42—Systems providing special services or facilities to subscribers
- H04M3/42221—Conversation recording systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M3/00—Automatic or semi-automatic exchanges
- H04M3/42—Systems providing special services or facilities to subscribers
- H04M3/50—Centralised arrangements for answering calls; Centralised arrangements for recording messages for absent or busy subscribers ; Centralised arrangements for recording messages
- H04M3/51—Centralised call answering arrangements requiring operator intervention, e.g. call or contact centers for telemarketing
- H04M3/5183—Call or contact centers with computer-telephony arrangements
- G06K2209/01
- G06K2209/03
- G06K2209/27
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/02—Recognising information on displays, dials, clocks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/10—Recognition assisted with metadata
Definitions
- In a call center environment, it is important to understand the interactions that call center agents have with the programs and applications used to handle caller inquiries, such as which applications or programs a call center agent uses or, more specifically, the sequence of application and program interactions and the sequence of data field interactions that a call center agent performs during order processing, customer support, technical support, troubleshooting and/or the like.
- Action discovery is typically accomplished by consultants who work alongside call center agents to understand the steps of the processes being performed, or who manually review recorded call center agent sessions offline. This requires a large time investment from the consultants, increases the time required to complete a process improvement initiative, and hinders existing process delivery. In addition, details of the process can be missed using this approach.
- The methods and systems described in this disclosure overcome many of the deficiencies of known process recognition approaches by implementing an automatic, computer-implemented process recognition approach that operates seamlessly and behind the scenes with call center communications systems technology while a call center agent is performing his or her duties.
- This approach provides an accurate, efficient and cost-effective way to automatically recognize processes and actions being performed by a call center agent during sessions without the need for human physical observation. Once these processes and actions are understood, they can be leveraged to identify process inefficiencies and bottlenecks.
- a system for recognizing processes performed by a call center agent during a session may include an electronic device and a computer-readable storage medium.
- the computer-readable storage medium includes one or more programming instructions that, when executed, cause the electronic device to perform or not perform one or more actions.
- the system collects from one or more input devices in communication with the electronic device, input data that includes data pertaining to one or more interactions that a call center agent has with one or more programs running on the electronic device that cause one or more graphical user interfaces to be displayed on a desktop of the electronic device during a session.
- the system analyzes the input data to generate one or more events, where each event includes a consolidation of at least a portion of the input data.
- the system generates a mid-level event log having one or more of the events, and performs action recognition on the mid-level event log to ascertain one or more actions that were performed within the one or more graphical user interfaces by the call center agent during the session.
- the system performs action recognition on the mid-level event log by, for one or more events in the mid-level event log, identifying a before-event image for the event, identifying an after-event image for the event, performing image analysis on the before-event image and the after-event image to generate a delta image showing a region of interest that includes one or more changes between the before-event image and the after-event image, and automatically classifying the event as an action based on the delta image.
- the before-event image is a screen shot of the desktop that was automatically captured by the electronic device in real-time before the event occurred and in anticipation of the event.
- the after-event image is a screen shot of the desktop of the electronic device that was automatically captured by the electronic device in real-time after the event occurred.
- the system generates an event log that comprises an indication of the event and corresponding action.
- the system may collect input data by initiating a keyboard logging thread to collect one or more keystrokes that are entered via a keyboard in communication with the electronic device during the session.
- the system may initiate a mouse logging thread to collect data associated with movement or operation of the mouse during the session.
- the system may initiate an active window logging thread to collect data associated with active windows during the session.
- the system may apply one or more heuristic rules to the input data to generate the mid-level event log. For example, the system may identify consecutive keyboard entries in the input data, and in response to determining that the time between each keyboard entry is less than a threshold value, merge the keyboard entries into a single typing event. As another example, the system may identify from the input data a mousedown event corresponding to a mouse followed by a mouseup event corresponding to the mouse with no change in the position of the mouse, and consolidate the mousedown event and the mouseup event into a single event represented as a mouse click event.
- the system may identify from the input data consecutive mouse clicks corresponding to a mouse, and consolidate the consecutive mouse clicks into a single event represented as a double click mouse click event.
- the system may identify from the input data a mousedown event followed by a plurality of mouse movement events followed by a mouseup event, and consolidate the mousedown event, the mouse movement events and the mouseup event into a single event represented as a mouse select event.
- the system may identify from the input data consecutive mouse wheel events, and consolidate the consecutive mouse wheel events into a single event represented as a mouse scroll event.
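These consolidation rules lend themselves to a small rule engine over time-stamped raw events. The following Python sketch illustrates one way such rules might be applied; the event names, record layout, and matching details are illustrative assumptions, not the patent's implementation.

```python
from dataclasses import dataclass

@dataclass
class RawEvent:
    t: float     # timestamp in seconds
    kind: str    # "mousedown", "mouseup", "mousemove", or "wheel"
    x: int = 0
    y: int = 0

def consolidate(events):
    """Collapse raw mouse events into discrete mid-level events."""
    out, i = [], 0
    while i < len(events):
        e = events[i]
        if e.kind == "mousedown":
            j = i + 1
            # skip over any mouse movement between the down and up events
            while j < len(events) and events[j].kind == "mousemove":
                j += 1
            if j < len(events) and events[j].kind == "mouseup":
                up = events[j]
                if j == i + 1 and (up.x, up.y) == (e.x, e.y):
                    # down immediately followed by up at the same position
                    out.append(("mouse_click", e.t, up.t))
                else:
                    # down, movement, then up: a drag/selection gesture
                    out.append(("mouse_select", e.t, up.t))
                i = j + 1
                continue
            i += 1
        elif e.kind == "wheel":
            j = i
            while j < len(events) and events[j].kind == "wheel":
                j += 1
            # a run of consecutive wheel events becomes one scroll event
            out.append(("mouse_scroll", e.t, events[j - 1].t))
            i = j
        else:
            i += 1
    return out
```

A further pass over the resulting click events could merge two clicks that occur in quick succession into a double click event, mirroring the consecutive-clicks rule above.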
- the system may perform image analysis to generate a delta image by identifying a screen to which the before-event image and the after-event image correspond by comparing the before-event image and the after-event image to one or more screens in a unique screen library.
- the unique screen library stores each unique screen that the call agent may encounter during a session along with metadata associated with each unique screen.
- the system may analyze the screen to identify a region on the screen that corresponds to the region of interest, and analyze the metadata associated with the screen to identify a data field located within the region. Classifying the event as an action based on the delta image may involve classifying the event based on the data field.
- the event may be a mouse click
- the system may analyze the metadata to identify a dropdown menu and classify the event as a select dropdown menu action.
- the system may perform optical character recognition on the dropdown menu.
- Classifying the event as an action based on the delta image may involve performing optical character recognition on the region of interest.
- the system may apply a process discovery technique to the event log to generate a process map showing the actions performed by the call center agent during the session.
- FIG. 1A illustrates an example system for automatically detecting processes according to an embodiment.
- FIG. 1B illustrates a high level system diagram according to various embodiments.
- FIG. 2 illustrates an example method of automatically detecting processes according to an embodiment.
- FIG. 3 illustrates an example method of a smart capture process according to an embodiment.
- FIG. 4A illustrates an example input data log according to an embodiment.
- FIG. 4B illustrates an example visual representation of logged data according to an embodiment.
- FIG. 5 illustrates example input data according to an embodiment.
- FIG. 6 illustrates example input data that includes raw mouse logs, corresponding mid-level event logs and a heuristic rule set according to various embodiments.
- FIG. 7 illustrates an example visual depiction of an action recognition process according to an embodiment.
- FIG. 8 illustrates an example method of performing action recognition according to an embodiment.
- FIG. 9 shows an example of a unique screen library (USL) of a call-type in a call center operation according to an embodiment.
- FIG. 10 shows an example of a labeled unique screen of a unique screen library according to an embodiment.
- FIG. 11 illustrates an example before-event image according to an embodiment.
- FIG. 12 illustrates an example after-event image according to an embodiment.
- FIG. 13 illustrates an example changed region according to an embodiment.
- FIG. 14 illustrates an example screen before-event image according to an embodiment.
- FIG. 15 illustrates an example after-event image according to an embodiment.
- FIG. 16 illustrates an example changed region according to an embodiment.
- FIG. 17 illustrates example classifications for a contiguous key input event according to an embodiment.
- FIG. 18 illustrates example classifications for a left mouse click event according to an embodiment.
- FIG. 19 illustrates example classifications for a left mouse select event according to an embodiment.
- FIG. 20 illustrates an example before-event image for a left mouse click event according to an embodiment.
- FIG. 21 illustrates an example after-event image for the left mouse click event according to an embodiment.
- FIGS. 22A and 22B illustrate example crop regions according to an embodiment.
- FIG. 23 illustrates a chart of example mouse velocities and thresholds according to an embodiment.
- FIG. 24 illustrates a block diagram of example hardware that may be used to contain or implement program instructions according to an embodiment.
- FIG. 25 illustrates an example event log according to an embodiment.
- an “after-event image” refers to a screen shot of a desktop of a user electronic device captured after the occurrence of an event.
- a “before-event image” refers to a screen shot of a desktop of a user electronic device captured before the occurrence of an event.
- a “call center” is an organization, location or group of locations that operates to initiate and/or receive inquiries from customers via phone, interactive messaging applications such as chat applications, or other interactive electronic media.
- a call center may process orders, provide customer service, provide technical support, answer inquiries initiated by telephone, email, fax, or from other input sources and/or the like.
- a call center utilizes various hardware, software and network components.
- a call center may include electronic devices, such as desktop or laptop computers, that call agents may use to handle calls. Call center agents may also use headsets to hear call audio.
- Electronic devices in a call center may communicate with one another or other electronic devices using one or more communication networks.
- a communication network may be a local area network (LAN), a wide area network (WAN), a mobile or cellular communication network, an extranet, an intranet, the Internet and/or the like.
- Electronic devices in a call center may run or operate various programs or applications to help call center agents perform their jobs. Examples of such programs or applications include, without limitation, customer relationship management systems, enterprise resource planning systems, workforce management systems, call queuing systems, call recording systems, and/or the like.
- Call centers may include physical telephones via which call center agents place and receive calls. Additionally and/or alternatively, call centers may utilize voice over Internet protocol (VoIP) to place and receive calls. Examples include telemarketing centers, customer technical or product support groups, and centers for receiving customer orders via phone.
- a “call center agent” is a person who is employed to initiate and/or receive communications, such as phone calls, web chats, or other live communications, from the call center's customers.
- a “call” refers to any such communication between a customer and a call center agent.
- a “delta image” refers to an image showing one or more differences between a before-event image for an event and an after-event image for the event.
- An “electronic device” or a “computing device” refers to a device that includes a processor and memory. Each device may have its own processor and/or memory, or the processor and/or memory may be shared with other devices as in a virtual machine or container arrangement.
- the memory may contain or receive programming instructions that, when executed by the processor, cause the electronic device to perform one or more operations according to the programming instructions. Examples of electronic devices include personal computers, servers, mainframes, virtual machines, containers, gaming systems, televisions, and mobile electronic devices such as smartphones, personal digital assistants, cameras, tablet computers, laptop computers, media players and the like.
- In a client-server arrangement, the client device and the server are each electronic devices, in which the server contains instructions and/or data that the client device accesses via one or more communications links in one or more communications networks.
- a server may be an electronic device, and each virtual machine or container may also be considered to be an electronic device.
- a client device, server device, virtual machine or container may be referred to simply as a “device” for brevity.
- "Processor" and "processing device" refer to a hardware component of an electronic device that is configured to execute programming instructions. Except where specifically stated otherwise, the singular term "processor" or "processing device" is intended to include both single-processing device embodiments and embodiments in which multiple processing devices together or collectively perform a process.
- "Memory," "memory device," "data store," "data storage facility" and the like each refer to a non-transitory device on which computer-readable data, programming instructions or both are stored. Except where specifically stated otherwise, these terms are intended to include single device embodiments, embodiments in which multiple memory devices together or collectively store a set of data or instructions, as well as individual sectors within such devices.
- a “widget” or “data field” refers to an element of a graphical user interface that displays information or provides a mechanism by which a user, such as a call agent, can interact with a program or application.
- Example data fields may include, without limitation, data entry fields, dropdown or pulldown menus, radial buttons and/or the like.
- a “window” or “screen” refers to a portion of a graphical user interface associated with an application, program or the like.
- a "process" refers to a set of computer-implemented steps or actions performed by a user to accomplish a particular result.
- a call agent may utilize various computer programs or applications to handle, address or resolve customer inquiries.
- the call agent may gather information pertaining to the inquiry from the caller, and may provide certain information to the computer programs or applications via one or more windows.
- a process may refer to the steps or actions performed in the agent's interaction with the computer programs or applications.
- a window may include one or more static elements and/or dynamic elements.
- a static element refers to data that is not changeable by a user of the window.
- Example static elements may include text such as field labels, instructions, notes and/or the like.
- a dynamic element refers to data whose value or content is changeable by a user of the window.
- Example dynamic elements may be widgets such as, for example, text fields, drop down menus, radio buttons and/or the like.
- FIG. 1A illustrates an example system for automatically detecting processes according to an embodiment.
- the system 100 includes a data collection system 102, an information extraction system 104, an action recognition system 106, an integration system 108, and a process mining system 110.
- the data collection system 102 collects input data from a user electronic device.
- a user electronic device refers to an electronic device associated with a user who is performing at least a portion of one or more processes that is being discovered.
- a user electronic device may refer to a desktop or laptop computer used by a call center agent who provides customer or technical support. Additional and/or alternate electronic devices may be used within the scope of this disclosure.
- a data collection system 102 may include interfaces to one or more input devices of a user electronic device. For instance, a data collection system 102 may interface with and collect input data from a keyboard, mouse, a video camera, a touch-sensitive screen or other input device of a user electronic device. Input data may include, without limitation, keyboard input, mouse input such as mouse click data and coordinate or positional information associated with the mouse, application data such as names associated with active windows, screen shots, and/or the like. As described in more detail below, a data collection system 102 can utilize smart capture capability so that it captures only relevant data of interest.
- a data collection system includes memory for storing input data or other information provided to the data collection system.
- the data collection system may also include an internal clock for providing time and date stamps for the collected information.
- a data collection system 102 may reside on a user electronic device. Alternatively, a data collection system 102 may be distributed, but communicate with the user electronic device via one or more networks such as a local area network or a wide area network, for example the Internet, the World Wide Web or the like.
- An information extraction system 104 analyzes at least a portion of the input data collected by the data collection system 102 to generate one or more mid-level event logs.
- the information extraction system 104 uses various techniques, such as image and data analytics as described below, to infer from input data actions performed in one or more processes and identify events at different levels of granularity. For instance, as described in more detail below, an information extraction system 104 may transform separate input data or logs of input data into one or more mid-level event logs.
- An action recognition system 106 utilizes mid-level event logs in conjunction with image analysis to facilitate action recognition as described in more detail below.
- the integration system 108 refines log information using one or more heuristic rules as described in more detail below.
- FIG. 1A illustrates the action recognition system 106 and integration system 108 as separate from the information extraction system 104 , it is understood that the action recognition system and/or the integration system may be implemented as a subsystem to an information extraction system, as illustrated by FIG. 1B .
- FIG. 1B illustrates a high level system diagram according to various embodiments.
- a process mining system 110 analyzes the logs generated by the information extraction system 104 to develop one or more process maps or other information pertaining to one or more processes.
- FIG. 2 illustrates an example method of automatically recognizing processes according to an embodiment.
- the system may collect 200 input data from a user electronic device.
- Input data may be data that pertains to one or more interactions that a call center agent has with a user electronic device during a session.
- a data collection system may collect 200 input data from a user electronic device. The data collection system monitors user-induced system activity and collects data pertaining to such activity.
- a user electronic device may contain and operate many different programs and applications that are provided by many different providers.
- a data collection system may not have direct access to such programs or applications, whether through APIs or otherwise. As such, a data collection system collects input data by direct communication with the operating system of a user electronic device.
- a data collection system may collect input data via one or more threads.
- a thread refers to a sequence of programming instructions that cause an electronic device to perform one or more actions.
- a data collection system may initiate one or more threads.
- One such thread may be a keyboard logging thread.
- a keyboard logging thread collects input data received by a user electronic device from a keyboard and an associated time stamp for such input data.
- the input data may include keystrokes and multi-keystroke combinations such as, for example, Ctrl-c, Ctrl-alt-s and/or the like.
- Another thread may be a mouse logging thread.
- a mouse logging thread collects input data received by a user electronic device from a mouse and an associated time stamp for such input data.
- the input data may include mouse position coordinates, mouse actions such as left-click, right-click, and scroll.
- the input data may include a name of a window over which a mouse hovers or is otherwise positioned. This may or may not be an active window.
- Another thread may be an active window logging thread.
- An active window logging thread collects information about an active window at a particular point in time.
- An active window refers to a currently focused window of a user electronic device.
- Information about an active window may include an indication of when an active window is changed, a timestamp associated with a change, one or more coordinates of an active window, active window changes in coordinates, active window title, and one or more timestamps associated with any of the foregoing.
- Another thread may be a screen capture thread.
- a screen capture thread captures and collects screen shots of a display of a user electronic device at a particular point in time. The screen shot includes an image of the user electronic device desktop at a certain time.
- a screen capture thread may only capture those screen shots of interest to assist in identifying activities and usage behavior.
- an event of interest may be an agent-initiated change of an active window.
- a screen capture thread may perform a smart capture process.
- a smart capture process anticipates an event of interest and in response, begins to collect screen shots in real-time at a high sampling rate. The screen capture thread stops sampling after the event has occurred.
- a screen capture thread may anticipate that an agent is about to change the active window on a user electronic device.
- the screen capture thread may collect screen shots in real-time at a high sampling rate until after the active window has changed. As such, the screen capture thread captures screen shots of both the previous active window and the new active window.
- FIG. 3 illustrates an example method of a smart capture process according to an embodiment.
- a data collection system may collect 300 the (x,y) pixel coordinates of the mouse over a period of time.
- the data collection system may use the coordinates to estimate 302 the mouse velocity.
- in response to determining that the mouse velocity has fallen below a threshold value, the data collection system may trigger 306 a screen capture.
- a screen capture may capture one or more screen shots of the desktop of a user electronic device.
- a screen capture may capture a single screen shot, or it may capture several screen shots, such as a burst of screen shots over a short period of time.
- this process may be performed each time the mouse velocity falls below a threshold value. Alternatively or additionally, this process may be performed at a certain time after the system detects a mouse click event.
- a mouse click event may be indicative that an action of interest such as, for example, the opening of a window, the closing of a window, depressing or toggling a switch, the typing of information, switching a window, or switching a tab, is about to occur. As such, the system may attempt to capture screen shots around that event.
- the data collection system may utilize a second threshold value that is greater than the first threshold value.
- the second threshold value may need to be exceeded before the data collection system permits another screen capture.
- the threshold values discussed above may be determined heuristically or experimentally.
- the threshold values may be scaled from one call center agent's desktop to another based on a screen resolution setting (e.g., using different thresholds for 1600×900 vs. 800×600 screen settings).
- the threshold values may initially be set low for a given call center agent (resulting in more screen captures), with the values changing over time as the system learns more about the habits of the call center agent by analyzing the screen captures.
- a first threshold value may be 180 pixels per second
- a second threshold value may be 215 pixels per second.
- the data collection system may trigger a screen capture when the mouse velocity falls below 180 pixels per second. However, the data collection system may not trigger another screen capture until the mouse velocity increases above 215 pixels per second and then once again falls below 180 pixels per second. This way, if the velocity increases to a level less than 215 pixels per second and then falls below 180 pixels per second due to noise, the data collection system does not perform unnecessary screen captures.
- FIG. 23 illustrates a chart of example mouse velocities and thresholds according to an embodiment.
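A minimal sketch of this hysteresis behavior follows, assuming mouse samples arrive as (time, x, y) tuples. The 180 and 215 pixels-per-second values are the example thresholds from above; everything else is illustrative.

```python
class SmartCaptureTrigger:
    """Velocity-based screen-capture trigger with hysteresis.

    Fires when mouse velocity drops below `low` (e.g., 180 px/s) and
    re-arms only after velocity exceeds `high` (e.g., 215 px/s), so that
    noise around the lower threshold does not cause repeated captures.
    """
    def __init__(self, low=180.0, high=215.0):
        self.low, self.high = low, high
        self.armed = True
        self.prev = None   # (t, x, y) of the previous mouse sample

    def update(self, t, x, y):
        """Feed one mouse sample; return True when a capture should fire."""
        fire = False
        if self.prev is not None:
            pt, px, py = self.prev
            dt = t - pt
            if dt > 0:
                velocity = ((x - px) ** 2 + (y - py) ** 2) ** 0.5 / dt
                if self.armed and velocity < self.low:
                    fire = True          # trigger a capture (or a short burst)
                    self.armed = False   # disarm until velocity rises again
                elif not self.armed and velocity > self.high:
                    self.armed = True    # re-arm for the next slowdown
        self.prev = (t, x, y)
        return fire
```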
- a screen capture thread may sample screen capture images at a certain period of time or at certain intervals. For example, a screen capture thread may sample screen capture images every half second. Additional and/or alternate time periods may be used within the scope of this disclosure.
- the system may compare one or more screen capture images to one or more preceding screen capture images that were captured. For instance, the system may compare a screen capture image to a screen capture image that was captured immediately before it.
- the system may determine whether the screen capture images are substantially similar. For instance, the system may compare the layout and/or content of the screen capture images to determine whether they are substantially different. Screen capture images may be substantially different if a number of pixels that have changed between the two exceeds a threshold value.
- if the screen capture images are substantially similar, the system may delete one of them from memory. This reduces the amount of memory used.
- otherwise, the system may keep both screen capture images.
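One way to implement this comparison is to count changed pixels between consecutive captures, as in the following sketch. The pixel threshold shown is an assumed value, not one specified in the disclosure.

```python
import numpy as np

def substantially_different(img_a, img_b, pixel_threshold=500):
    """Return True if two same-sized screen shots (H x W x C uint8 arrays)
    differ in more pixels than `pixel_threshold`.

    The threshold here is illustrative; in practice it would be tuned to
    ignore cursor blinks and other minor rendering noise.
    """
    if img_a.shape != img_b.shape:
        return True  # different resolutions are trivially different
    changed = np.any(img_a != img_b, axis=-1)   # per-pixel change mask
    return int(changed.sum()) > pixel_threshold

def prune_duplicates(captures):
    """Keep a capture only if it differs substantially from the last kept one."""
    kept = []
    for img in captures:
        if not kept or substantially_different(kept[-1], img):
            kept.append(img)
    return kept
```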
- the smart capture process described by FIG. 3 allows the system to collect and store targeted screen shots, which reduces the processing capacity needed to analyze such screen shots as well as the memory needed to store such screen shots.
- a data collection system stores 202 the input data collected by one or more of the threads.
- the data collection system stores 202 the input data in a database, table or other data structure in memory.
- Each item of input data may be stored as a separate entry in memory.
- the items may be ordered sequentially based on their corresponding time stamp.
- FIG. 4A illustrates an example input data log according to an embodiment.
- FIG. 4B illustrates an example visual representation of logged data according to an embodiment.
- a data collection system provides 204 at least a portion of the stored input data to an information extraction system.
- a data collection system may send at least a portion of the stored input data to an information extraction system, or it may make such input data accessible to the information extraction system.
- An information extraction system analyzes 206 the input data to generate one or more mid-level event logs.
- Mid-level event logs may include a portion of input data that has been merged, consolidated or otherwise transformed, which may be represented as an event.
- a mid-level event log includes an indication of one or more events, where each event represents an indication of one or more interactions that a call center agent had with a desktop during a session.
- An information extraction system analyzes 206 input data by analyzing temporal proximity of items of input data.
- An information extraction system analyzes 206 input data by applying one or more heuristic rules to the input data.
- input data representing consecutive keyboard entries may be merged into a single typing event if the time between each consecutive keyboard entry is less than 1 second.
- FIG. 5 illustrates example input data according to an embodiment.
- the input data 500 includes multiple individual time-stamped entries indicating that certain characters were entered.
- Each entry includes an indication of the date that the entry was received by a user electronic device, a time that the entry was received by the user electronic device, and an indication of the character that was received.
- the first entry indicates that the character ‘w’ was received at 12:34:18:854 on Jul. 18, 2016.
- a heuristic rule may specify that consecutive keyboard entries are to be merged into a single event in a mid-level event log if the time between each consecutive keyboard entry is less than 1 second. Applying this heuristic rule to the input data illustrated by FIG. 5 yields the mid-level event log 502 illustrated by FIG. 5. As the time stamp for each input data entry is within one second of the previous entry, the characters represented by the input data are merged into a single event log. As shown by FIG. 5, the mid-level event log includes an indication of a start time, an end time, and content. The start time represents the date and/or time that the first data entry of the mid-level event log is received, and the end time represents the date and/or time that the last data entry of the mid-level event log is received.
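A sketch of this merging rule, assuming keyboard input arrives as time-stamped (datetime, character) pairs as in FIG. 5; the record layout is an assumption for illustration.

```python
from datetime import datetime, timedelta

def merge_keystrokes(entries, gap=timedelta(seconds=1)):
    """Merge time-stamped character entries into typing events.

    `entries` is a list of (datetime, char) tuples ordered by time.
    Consecutive entries closer than `gap` are merged into one event
    with a start time, an end time, and accumulated content.
    """
    events = []
    for ts, ch in entries:
        if events and ts - events[-1]["end"] < gap:
            events[-1]["end"] = ts
            events[-1]["content"] += ch
        else:
            events.append({"start": ts, "end": ts, "content": ch})
    return events

# Example mirroring FIG. 5: characters typed in quick succession
log = [
    (datetime(2016, 7, 18, 12, 34, 18, 854000), "w"),
    (datetime(2016, 7, 18, 12, 34, 19, 100000), "o"),
    (datetime(2016, 7, 18, 12, 34, 19, 400000), "w"),
]
print(merge_keystrokes(log))  # one typing event with content "wow"
```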
- An information extraction system may maintain and store a set of heuristic rules that it may apply to input data.
- the set of heuristic rules may be stored in memory in, for example, a database, a list, a table and/or the like.
- one or more heuristic rules may be specified by a system user, operator, administrator and/or the like.
- FIG. 6 illustrates example input data that includes raw mouse logs, corresponding mid-level event logs and a heuristic rule set according to various embodiments.
- Applying the heuristic rules 600 illustrated in FIG. 6 to the input data 602, 604 illustrated in FIG. 6 yields the mid-level event logs 606, 608 illustrated in FIG. 6.
- applying the heuristic rules to the input data transforms the raw mouse data (e.g., mouse down (left, middle, right), mouse up (left, middle, right), and mouse move) to discrete events (e.g., mouse click (left or right), mouse double click (left or right), mouse scroll (left or right), and mouse select).
- the information extraction system may remove data from the input data that does not correlate to one or more heuristic rules as this data is likely non-meaningful to process identification.
- the information extraction system may provide 208 one or more of the mid-level event logs to an action recognition system.
- An action recognition system may perform action recognition 210 on the mid-level event logs using image analysis and analysis of select known details of the mid-level event logs.
- FIG. 7 illustrates an example visual depiction of the action recognition process.
- FIG. 8 illustrates an example method of performing action recognition according to an embodiment.
- for each event in a mid-level event log, the action recognition system may identify 800 a pair of images corresponding to the event.
- the pair of images includes a before-event image and an after-event image.
- the before-event image may be the nearest available image capture prior to the start of the event.
- the after-event image may be the nearest available image capture after the end of the event.
- the action recognition system analyzes the pair of images to classify 802 the event.
- Classifying an event may involve assigning the event a specific action identifier and a specific widget identifier.
- An action identifier refers to an indication of an action that a user performed on a window or windows as part of the event.
- Example action identifiers may include, without limitation, select, click, change, data entry, scroll, start/switch, move/resize, keyboard, mouse and/or the like.
- a widget identifier refers to an indication of widget type associated with the event and the action. For instance, “Select” is an example of an action identifier, and “Dropdown” is an example of a widget identifier. This notation indicates that a user selected a dropdown widget from a window.
- Event classifications are referred to throughout this disclosure in the format "<Action identifier>—<Widget identifier>" or "<Action identifier>, <Widget identifier>".
- FIG. 17 illustrates example classifications for a contiguous key input event according to an embodiment.
- FIG. 18 illustrates example classifications for a left mouse click event according to an embodiment.
- FIG. 19 illustrates example classifications for a left mouse select event. Additional and/or alternate events, classifications or combinations of events and classifications may be used within the scope of this disclosure.
- the action recognition system may use various techniques and combinations of techniques such as, for example, optical character recognition (OCR), form recognition, widget identification and/or the like. These techniques may be used in combination with other event log information and heuristic rules to facilitate action recognition and classify 802 an event.
- the action recognition system determines a delta image associated with the before-event image and the after-event image.
- a keyboard event refers to an event associated with input data that was received by a keyboard of a user electronic device.
- the action recognition system analyzes the delta image to identify regions of change due to the keyboard event. For instance, an action recognition system may compare a before-event image to an after-event image to determine one or more changed regions of the image.
- a changed region refers to a region of the after-event image that is different than the same region in the before-event image.
- a change may be due to a user, such as an agent, entering text into a data field, selecting an option from a drop down menu, and/or the like.
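A minimal sketch of computing a delta image and its changed region from a before-event and after-event capture, assuming both are NumPy arrays of identical size; the noise threshold is an assumed value.

```python
import numpy as np

def delta_regions(before, after, noise_threshold=8):
    """Compute a delta image and the bounding box of the changed region.

    `before` and `after` are same-sized uint8 arrays (H x W x C).
    Returns (delta, bbox) where `delta` is a per-pixel change mask and
    `bbox` is (top, left, bottom, right), or None if nothing changed.
    The noise threshold suppresses minor rendering noise.
    """
    diff = np.abs(before.astype(np.int16) - after.astype(np.int16)).max(axis=-1)
    delta = diff > noise_threshold
    ys, xs = np.nonzero(delta)
    if ys.size == 0:
        return delta, None
    bbox = (int(ys.min()), int(xs.min()), int(ys.max()) + 1, int(xs.max()) + 1)
    return delta, bbox
```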
- the system may use form recognition to analyze images. For instance, each time that the system encounters an image of a window, it may determine whether the window corresponds to a form that has been encountered before by the system. The system may make this determination by referencing a unique screen library (USL) maintained by the system.
- a USL contains information relevant to the unique windows, screens or applications that are accessible by a user of a user electronic device such as, for example, how many unique windows exist and their layouts, the image appearance of the windows and/or the like.
- a USL may also contain metadata associated with one or more screens such as, for example, coordinates of widgets located on the screen or other layout information.
- FIG. 9 shows an example of a USL of a call-type in a call center operation.
- the USL has a hierarchical structure.
- the top layer contains the list of unique screens represented by their names.
- various metadata can be collected and stored. For example, it may contain a sample image for template matching, a set of image features such as location and description of speeded up robust feature points, a set of layout information indicating types, positions, and sizes of known widgets (e.g., text field, pull-down menu, radio button, check box, etc.) in the given unique screen. Additional information about each unique screen can also be stored elsewhere for later retrieval.
- Once a screen is recognized, its layout information is analyzed for action recognition. For instance, the metadata associated with a screen may be used to identify one or more widgets present within a region of interest.
- FIG. 10 shows an example of a labeled unique screen of the USL illustrated in FIG. 9, and an example of metadata about the layout stored in the USL.
- a USL may include one or more rules followed by the unique screen.
- An example rule may be that a pull-down menu will appear whenever a mouse click happens within a box region.
- the action recognition system may utilize a USL to identify more information about the changed regions. For example, a USL may provide insight into what data fields are present within a changed region, and this information may help the action recognition system classify the event. For example, if the action recognition system determines that a changed region includes data-entry fields, the action recognition system may classify the event as a "DataEntry" action. Depending on how many changed data-entry fields are detected in the delta image, the action recognition system may assign the event a widget identifier of "Field" or "Multiple Fields."
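The following sketch shows one possible shape for a USL and the lookups described above. The screen names, widget layout, and the mean-absolute-difference matching metric are illustrative assumptions; the disclosure itself mentions template matching and speeded up robust feature points as candidate techniques.

```python
import numpy as np

# A minimal unique screen library: each entry holds a sample image for
# matching plus widget layout metadata (all names here are illustrative).
USL = {
    "OrderEntry": {
        "sample": None,  # would hold an H x W x C reference image
        "widgets": [
            {"name": "CustomerName", "type": "text_field", "box": (40, 60, 64, 300)},
            {"name": "State",        "type": "pulldown",   "box": (80, 60, 104, 200)},
        ],
    },
}

def match_screen(image, usl):
    """Identify the unique screen a capture corresponds to by finding the
    library sample with the smallest mean absolute pixel difference."""
    best, best_score = None, float("inf")
    for name, entry in usl.items():
        sample = entry["sample"]
        if sample is None or sample.shape != image.shape:
            continue
        score = float(np.mean(np.abs(sample.astype(np.int16) - image.astype(np.int16))))
        if score < best_score:
            best, best_score = name, score
    return best

def widgets_in_region(screen_name, bbox, usl):
    """Return the widgets whose layout boxes intersect a region of interest."""
    top, left, bottom, right = bbox
    hits = []
    for w in usl[screen_name]["widgets"]:
        wt, wl, wb, wr = w["box"]
        if wt < bottom and wb > top and wl < right and wr > left:
            hits.append(w)
    return hits
```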
- FIG. 7 illustrates example classifications of events according to various embodiments.
- FIG. 11 illustrates an example before-event image
- FIG. 12 illustrates an example after-event image
- FIG. 14 illustrates an example screen before-event image
- FIG. 15 illustrates an example after-event image
- An action recognition system may process mouse events depending on the particular type of mouse event that is at issue.
- a mouse event refers to an event associated with input data that was received by a mouse of a user electronic device.
- For a "mouse scroll" event, an action recognition system may identify a before-event image and an after-event image, and compute a delta image from the images. From the delta image, the action recognition system may determine a changed region as described above. If the changed region exceeds a threshold value, the action recognition system classifies the mouse event as "Scroll—Pop-up/New Window."
- the changed region may be measured as the number of pixels that have changed between the before-event image and the after-event image, and the threshold value may be a nonzero positive number chosen to minimize false positives due to image noise.
- if the changed region does not exceed the threshold value, the action recognition system removes the mid-level event log entry for the mouse event, as it is unlikely that any significant change occurred.
- For a "left mouse double click" mouse event, the action recognition system compares the mouse position where the double click occurred to the coordinates of an active window in a before-event image. If the mouse position is not within the active window, the action recognition system classifies the mouse event as "Change—Screen." If the mouse position is within the active window, the action recognition system uses the mouse position and image analysis of the before-event image and the delta image to classify the mouse event into one of the following classifications: "Select—Field" (if highlighted text is detected around the mouse click position in the before-event image); "Resize—Pop-up/New Window" (if the active window size changed and the mouse click is on a top portion of the active window); and "Mouse—Others" (for any other conditions).
- an action recognition system For a “left mouse click” mouse event, an action recognition system compares a mouse position where the click occurred to the coordinates of an active window in a before-event image. If the mouse position is outside of the active window, the action recognition system classifies the event as “Change—Pop-up/New Window.” If the mouse position is within the active window, the action recognition system uses the mouse position and image analysis of the before-event image and the delta image to classify the mouse event.
- an action recognition system may identify a region around the area of the mouse click. If the region includes a widget, the action recognition system classifies the mouse event as “Click— ⁇ widget name>.” If the region does not include a widget, then the action recognition system analyzes the delta image to identify one or more regions of change due to the mouse event. If the regions of change include a pulldown menu, the action recognition system determines whether the pulldown menu is present in the before-event image but not in the after-event image. If so, the action recognition system classifies the mouse event as “Select—PullDown.” The action recognition system may use OCR on the region to determine which item from the pulldown menu is selected.
- the action recognition system determines that the pulldown menu is present in the after-event image but not in the before-event image, the action recognition system classifies the mouse event as “Click—Pulldown.”
- the action recognition module may record the label or title of the pulldown (which may be obtained from form recognition) along with the classification to identify which pulldown menu was activated.
- if the action recognition system determines that the pulldown menu is present in both the before-event image and the after-event image, it classifies the mouse event as "Mouse—Others." This classification may indicate events that are not known, not recognized, or not encountered frequently.
- the action recognition system may perform OCR on the region of the before-event image near the mouse click position. If no text is found, the action recognition system classifies the event as “Mouse—Others.” If text is found, then it is likely that the region includes a clickable or selectable button. If the active window names are the same in the before-event image and the after-event image, the action recognition system classifies the event as “Click—Button.” If the active window names are different in the before-event image and the after-event image, the action recognition system classifies the mouse event as “Click—DialogBox.” The action recognition system may also record the name or title of the button or dialog box to identify which button or dialog box has been selected.
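The left mouse click rules above form a small decision tree. The sketch below encodes them directly; the helper callables stand in for the widget-lookup, pulldown-detection, and OCR steps and are hypothetical names, not APIs from the patent.

```python
def classify_left_click(mouse_pos, active_window_box, before, after,
                        widget_at, pulldown_in, ocr_text_near):
    """Decision logic for a "left mouse click" event, following the rules above.

    `before` and `after` are dicts with "image" and "window_name" keys; the
    helper callables are hypothetical stand-ins for widget lookup, pulldown
    detection, and OCR.
    """
    x, y = mouse_pos
    left, top, right, bottom = active_window_box
    if not (left <= x <= right and top <= y <= bottom):
        return "Change—Pop-up/New Window"   # click landed outside the active window
    widget = widget_at(before["image"], mouse_pos)
    if widget is not None:
        return f"Click—{widget}"
    in_before = pulldown_in(before["image"], mouse_pos)
    in_after = pulldown_in(after["image"], mouse_pos)
    if in_before and not in_after:
        return "Select—PullDown"            # OCR the region to record the chosen item
    if in_after and not in_before:
        return "Click—Pulldown"
    if in_before and in_after:
        return "Mouse—Others"
    if not ocr_text_near(before["image"], mouse_pos):
        return "Mouse—Others"               # no text found near the click position
    # Text found: likely a button; compare active window names to distinguish
    if before["window_name"] == after["window_name"]:
        return "Click—Button"
    return "Click—DialogBox"
```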
- FIG. 20 illustrates an example before-event image for a left mouse click event according to an embodiment.
- FIG. 21 illustrates an example after-event image for the left mouse click event according to an embodiment.
- FIGS. 22A and 22B illustrate crop regions of the before-event image and the after-event image, respectively, that are identified from the delta image for the event. As illustrated by FIGS. 22A and 22B , the before-event image does not include a list-item, while the after-event image includes a list-item image. As such, the action recognition system classifies the event as a “Click—Pulldown” event.
- For a "left mouse select" event, the action recognition system uses a start mouse position and an end mouse position to crop out a segment of the before-event image.
- the action recognition system performs image analysis to confirm whether the selected region includes highlighted text. If it does not, the action recognition system classifies the event as “Mouse—Others.” If it does, the action recognition system classifies the event as “Select—Field.”
- the action recognition system may extract the text that has been selected via OCR and record it as detailed information of the event.
- a "right mouse click" event may be unusual, but if encountered, it is likely that a pop-up selection menu will be present in the after-event image.
- the action recognition system analyzes the delta image to find the regions of change and confirm the presence of a list-item. If confirmed, the action recognition system classifies the event as "Click—Pop-up/New Window," meaning a click action that caused a pop-up window to appear. If not confirmed, the event is classified as "Mouse—Others."
- a "right mouse select" event occurs rarely and is likely due to an extra click.
- the action recognition system treats the event as a “right mouse click” and analyzes it the same way.
- an integration system may resolve 212 one or more events having a widget identifier “Others.”
- An integration system may resolve 212 an event having a widget identifier “Others” using one or more heuristic rules. For example, an event classified as “Click—Pulldown” is expected to be followed by an event that is classified as “Select—Pulldown.” As such, if there is an event classified as “Mouse—Others” or “Keyboard—Others” that immediately follows an event classified as “Click—Pulldown” and no event classified as “Select—Pulldown” follows it within a time period, then the integration system will update the classification “Mouse—Others” to “Select—Pulldown.”
- a left mouse select or left mouse double click event that is classified as “Select—Field” is likely to be followed by a copy and paste action, such as CTRL-C and CTRL-V. If the integration system identifies a copy and paste event from a log that is not preceded by an event labeled “Select—Field” in the log (e.g., during a certain time period), the integration system may change the nearest event classified as “Mouse—Others” that occurred prior to the copy and paste to a classification of “Select—Field.”
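A sketch of the first integration rule above, assuming classified events are dicts with a timestamp and a label; the field names and the length of the time window are illustrative assumptions.

```python
def resolve_others(events, max_gap=5.0):
    """Relabel a "Mouse—Others"/"Keyboard—Others" event that directly follows a
    "Click—Pulldown" when no "Select—Pulldown" arrives within `max_gap` seconds.

    `events` is a time-ordered list of dicts with "t" (seconds) and "label".
    """
    for i, e in enumerate(events[:-1]):
        nxt = events[i + 1]
        if (e["label"] == "Click—Pulldown"
                and nxt["label"] in ("Mouse—Others", "Keyboard—Others")):
            # Is a proper Select—Pulldown coming soon anyway? If so, leave as-is.
            followed = any(f["label"] == "Select—Pulldown"
                           for f in events[i + 2:]
                           if f["t"] - e["t"] <= max_gap)
            if not followed:
                nxt["label"] = "Select—Pulldown"
    return events
```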
- a process mining system may use the mid-level event logs generated by the information extraction system and/or the classifications generated by the action recognition system to generate 214 one or more event logs.
- FIG. 25 illustrates an example event log according to an embodiment.
- the event logs may capture events at different levels of granularity. For example, an event log may capture events where the activities are the application names at a high level. Alternatively, an event log may capture actions at a much finer widget action level.
- An event log may include an indication of one or more steps of a process. The indication may be in the form of a process flow or a process map.
- the process mining system may include one or more subsystems.
- Example subsystems may include, without limitation, a process map discovery subsystem, a process performance subsystem, and a conformance subsystem.
- the process map discovery subsystem may apply one or more process discovery techniques to one or more event logs to identify one or more processes.
- Example process discovery techniques include, without limitation, Petri nets, EPC process modeling, BPMN process modeling, heuristic nets and/or the like.
- a process map discovery subsystem discovers the control-flow of processes from the event logs.
- the process map discovery subsystem may generate corresponding process maps based on the control-flow.
- the process map discovery subsystem may identify hierarchical process maps with seamless zoom-in/out facility.
- Event logs may be captured at different levels of granularity. For coarser granular events, the system can capture detailed sub-events performed underneath the event itself such as, for example, actions that are performed within an application. These sub-events may be used to capture the sub-process running underneath the coarse activity. Upon zooming on the coarse activity, the system may discover the subprocess as well.
- a process map may be enriched with additional insightful information. For example, edges and nodes may be annotated with frequency information. The thickness of an edge may be used to depict a relative frequency of traversal.
- a process map may quickly highlight the highways or paths of the process.
- a process performance subsystem may analyze different perspectives of process performance from one or more event logs. For example, a process performance subsystem may analyze flow-times between successive states of the process, the average flow-time and flow-time distributions between successive flows, and/or the like.
- the process map may be annotated based on this analysis. For example, bottleneck-prone flows may be highlighted in red, and upon clicking any flow, the flow-time distribution along with other performance statistics may be displayed.
- a process performance subsystem may analyze flow-times between any two states of the process. For instance, the average flow-time along with flow-time distributions between any two states or transitions of a process may be computed and displayed. If a process model has "N" activities, the process performance subsystem may depict an N×N triangular matrix, where each cell (I,J) depicts the average flow-time between activities I and J. Further, with rich interaction, the subsystem may show the distribution of flow-times and other statistics between any pair of activities upon clicking on any cell.
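A minimal sketch of computing the cells of such a flow-time matrix from time-ordered traces; the trace format is an assumption for illustration.

```python
from collections import defaultdict
from itertools import combinations

def flow_time_matrix(traces):
    """Compute average flow-times between every ordered pair of activities.

    `traces` is a list of cases, each a time-ordered list of
    (activity_name, timestamp_seconds) pairs. Returns a dict mapping
    (activity_i, activity_j) to the average elapsed time from i to j,
    i.e., the cells of the N x N matrix described above.
    """
    sums, counts = defaultdict(float), defaultdict(int)
    for trace in traces:
        for (a_i, t_i), (a_j, t_j) in combinations(trace, 2):
            sums[(a_i, a_j)] += t_j - t_i
            counts[(a_i, a_j)] += 1
    return {pair: sums[pair] / counts[pair] for pair in sums}

# Example: two short cases
traces = [[("OpenOrder", 0), ("DataEntry", 30), ("Submit", 90)],
          [("OpenOrder", 0), ("DataEntry", 45), ("Submit", 80)]]
print(flow_time_matrix(traces)[("OpenOrder", "Submit")])  # 85.0
```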
- a process performance subsystem may analyze the average execution time of one or more tasks/activities in the process and compute and display the corresponding execution time distributions.
- a process performance subsystem may analyze the behavior of resources on various tasks. For example, the subsystem may generate resource × task M×N matrices, where “M” resources worked on “N” tasks in the process. Different matrices may capture different aspects of resource behavior on the tasks; e.g., one matrix captures the frequency of execution, while another captures the average execution time of a resource on a particular task. Upon clicking a cell in a matrix, further details can be shown, e.g., the execution time distribution of a resource on a particular task.
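- A minimal sketch of the two resource-by-task aggregations mentioned above, assuming events carry illustrative `resource`, `task`, and `duration_s` fields:

```python
from collections import Counter, defaultdict

def resource_task_matrices(events):
    """events: dicts with assumed "resource", "task", and "duration_s" fields.
    Returns two sparse matrices keyed by (resource, task): execution
    frequency, and average execution time."""
    freq = Counter()
    durations = defaultdict(list)
    for ev in events:
        key = (ev["resource"], ev["task"])
        freq[key] += 1
        durations[key].append(ev["duration_s"])
    avg_time = {k: sum(v) / len(v) for k, v in durations.items()}
    return freq, avg_time

events = [{"resource": "agent-17", "task": "Data entry", "duration_s": 42.0},
          {"resource": "agent-17", "task": "Data entry", "duration_s": 38.0}]
freq, avg = resource_task_matrices(events)
print(freq[("agent-17", "Data entry")], avg[("agent-17", "Data entry")])  # 2 40.0
```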
- a conformance subsystem may check for compliance of process execution with respect to expected behavior.
- Expected behavior may be specified in the form of process models, business rules and/or the like.
- the subsystem may analyze the actual behavior as compared to the expected behavior and capture the deviations or anomalies, which it may present to a user, such as a business analyst.
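- A minimal conformance-checking sketch, assuming the expected behavior is expressed as a set of allowed activity transitions (one simple form of business rule; a full process model would serve the same role). Activity names are illustrative.

```python
def conformance_deviations(trace, allowed_transitions):
    """Flag each transition in a trace that the expected model does not allow."""
    return [(a, b) for a, b in zip(trace, trace[1:])
            if (a, b) not in allowed_transitions]

allowed = {("Open CRM", "Select—Field"),
           ("Select—Field", "Copy-Paste"),
           ("Copy-Paste", "Close CRM")}
trace = ["Open CRM", "Copy-Paste", "Close CRM"]
print(conformance_deviations(trace, allowed))  # [('Open CRM', 'Copy-Paste')]
```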
- a process mining system may offer one or more recommendations or suggestions for process improvement. For instance, a process mining system may identify one or more bottlenecks or steps that may be redundant to the process. The process mining system may recommend that these bottlenecks or redundant steps be removed. The process mining system may present one or more recommendations to a user such as, for example, an administrator.
- FIG. 24 depicts a block diagram of hardware that may be used to contain or implement program instructions, such as those of a cloud-based server, electronic device, virtual machine, or container.
- a bus 2400 serves as an information highway interconnecting the other illustrated components of the hardware.
- the bus may be a physical connection between elements of the system, or a wired or wireless communication system via which various elements of the system share data.
- Processor 2405 is a processing device that performs calculations and logic operations required to execute a program.
- Processor 2405, alone or in conjunction with one or more of the other elements disclosed in FIG. 24, is an example of a processing device, computing device or processor as such terms are used within this disclosure.
- the processing device may be a physical processing device, a virtual device contained within another processing device, or a container included within a processing device.
- a memory device 2410 is a hardware element or segment of a hardware element on which programming instructions, data, or both may be stored.
- Read only memory (ROM) and random access memory (RAM) are examples of such memory devices.
- An optional display interface 2430 may permit information to be displayed on the display 2435 in audio, visual, graphic or alphanumeric format. Communication with external devices, such as a printing device, may occur using various communication devices 2440, such as a communication port or antenna.
- a communication device 2440 may be communicatively connected to a communication network, such as the Internet or an intranet.
- the hardware may also include a user input interface 2445 which allows for receipt of data from input devices such as a keyboard or keypad 2450, or other input device 2455 such as a mouse, a touch pad, a touch screen, a remote control, a pointing device, a video input device and/or a microphone. Data also may be received from an image capturing device 2420 such as a digital camera or video camera.
- a positional sensor 2415 and/or motion sensor 2465 may be included to detect position and movement of the device. Examples of motion sensors 2465 include gyroscopes or accelerometers.
- An example of a positional sensor 2415 is a global positioning system (GPS) sensor device that receives positional data from an external GPS network.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Signal Processing (AREA)
- Marketing (AREA)
- Business, Economics & Management (AREA)
- Quality & Reliability (AREA)
- Computing Systems (AREA)
- Computer Hardware Design (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Data Mining & Analysis (AREA)
- Mathematical Physics (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Computation (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Psychiatry (AREA)
- Social Psychology (AREA)
- Human Computer Interaction (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- User Interface Of Digital Computer (AREA)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/479,669 US9883035B1 (en) | 2017-02-02 | 2017-04-05 | Methods and systems for automatically recognizing actions in a call center environment using screen capture technology |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201762453596P | 2017-02-02 | 2017-02-02 | |
US15/479,669 US9883035B1 (en) | 2017-02-02 | 2017-04-05 | Methods and systems for automatically recognizing actions in a call center environment using screen capture technology |
Publications (1)
Publication Number | Publication Date |
---|---|
US9883035B1 true US9883035B1 (en) | 2018-01-30 |
Family
ID=61005598
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/479,669 Active US9883035B1 (en) | 2017-02-02 | 2017-04-05 | Methods and systems for automatically recognizing actions in a call center environment using screen capture technology |
US15/479,678 Active US9998598B1 (en) | 2017-02-02 | 2017-04-05 | Methods and systems for automatically recognizing actions in a call center environment using screen capture technology |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/479,678 Active US9998598B1 (en) | 2017-02-02 | 2017-04-05 | Methods and systems for automatically recognizing actions in a call center environment using screen capture technology |
Country Status (1)
Country | Link |
---|---|
US (2) | US9883035B1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109583472A (en) * | 2018-10-30 | 2019-04-05 | 中国科学院计算技术研究所 | A kind of web log user identification method and system |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015073723A1 (en) | 2013-11-13 | 2015-05-21 | Envision Telephony, Inc. | Systems and methods for desktop data recording for customer-agent interactions |
2017
- 2017-04-05 US US15/479,669 patent/US9883035B1/en active Active
- 2017-04-05 US US15/479,678 patent/US9998598B1/en active Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100205561A1 (en) | 2009-02-06 | 2010-08-12 | Primax Electronics Ltd. | Mouse having screen capture function |
US8463606B2 (en) * | 2009-07-13 | 2013-06-11 | Genesys Telecommunications Laboratories, Inc. | System for analyzing interactions and reporting analytic results to human-operated and system interfaces in real time |
US20130016115A1 (en) * | 2011-07-13 | 2013-01-17 | Incontact, Inc. | Activity recording of contact handling system agents |
US20140095600A1 (en) | 2012-09-28 | 2014-04-03 | Bradford H. Needham | Multiple-device screen capture |
US20140222995A1 (en) | 2013-02-07 | 2014-08-07 | Anshuman Razden | Methods and System for Monitoring Computer Users |
US20150378577A1 (en) * | 2014-06-30 | 2015-12-31 | Genesys Telecommunications Laboratories, Inc. | System and method for recording agent interactions |
Non-Patent Citations (2)
Title |
---|
Information about Related Patents and Patent Applications, see section 6 of the accompanying Information Disclosure Statement Letter, which concerns Related Patents and Patent Applications. |
U.S. Appl. No. 15/479,678, filed Apr. 5, 2017, Methods and Systems for Automatically Recognizing Actions in a Call Center Environment Using Screen Capture Technology. |
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190141185A1 (en) * | 2015-06-30 | 2019-05-09 | Avaya Inc. | Microphone monitoring and analytics |
US20190166211A1 (en) * | 2017-11-28 | 2019-05-30 | Palo Alto Research Center Incorporated | User interaction analysis of networked devices |
US10567526B2 (en) * | 2017-11-28 | 2020-02-18 | Palo Alto Research Center Incorporated | User interaction analysis of networked devices |
EP3518140A1 (en) * | 2018-01-24 | 2019-07-31 | Wistron Corporation | Image data retrieving method and image data retrieving device |
US11017254B2 (en) | 2018-01-24 | 2021-05-25 | Wistron Corporation | Image data retrieving method and image data retrieving device |
CN108427582A (en) * | 2018-03-09 | 2018-08-21 | 北京小米移动软件有限公司 | Interim card state determines method, apparatus and computer readable storage medium |
CN108427582B (en) * | 2018-03-09 | 2022-02-08 | 北京小米移动软件有限公司 | Method and device for determining stuck state and computer readable storage medium |
US12075000B2 (en) * | 2019-09-05 | 2024-08-27 | Sony Group Corporation | Application extension program, information processing apparatus, and method |
US20220329686A1 (en) * | 2019-09-05 | 2022-10-13 | Sony Group Corporation | Application extension program, information processing apparatus, and method |
CN114430823A (en) * | 2019-11-06 | 2022-05-03 | 西门子股份公司 | Software knowledge capturing method, device and system |
US20230028654A1 (en) * | 2020-01-08 | 2023-01-26 | Nippon Telegraph And Telephone Corporation | Operation log acquisition device and operation log acquisition method |
US20210350134A1 (en) * | 2020-05-05 | 2021-11-11 | Skan Inc. | Generating event logs from video streams |
US20210349941A1 (en) * | 2020-05-05 | 2021-11-11 | Skan Inc. | Assigning case identifiers to video streams |
US11704362B2 (en) * | 2020-05-05 | 2023-07-18 | Skan Inc. | Assigning case identifiers to video streams |
US11710313B2 (en) * | 2020-05-05 | 2023-07-25 | Skan Inc. | Generating event logs from video streams |
CN113449196B (en) * | 2021-07-16 | 2024-04-19 | 北京金堤科技有限公司 | Information generation method and device, electronic equipment and readable storage medium |
CN113449196A (en) * | 2021-07-16 | 2021-09-28 | 北京天眼查科技有限公司 | Information generation method and device, electronic equipment and readable storage medium |
US12277152B2 (en) | 2022-06-23 | 2025-04-15 | Vertiv Corporation | System and method for serial-over-IP switch based character string pattern matching and detection |
Also Published As
Publication number | Publication date |
---|---|
US9998598B1 (en) | 2018-06-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9883035B1 (en) | Methods and systems for automatically recognizing actions in a call center environment using screen capture technology | |
US11811805B1 (en) | Detecting fraud by correlating user behavior biometrics with other data sources | |
US11798209B1 (en) | Systems and methods for rendering a third party visualization in response to events received from search queries | |
JP6549806B1 (en) | Segment the content displayed on the computing device into regions based on the pixel of the screenshot image that captures the content | |
US10762277B2 (en) | Optimization schemes for controlling user interfaces through gesture or touch | |
US20180234328A1 (en) | Service analyzer interface | |
US10038785B1 (en) | Methods and systems for automatically recognizing actions in a call center environment using video data | |
US20150058092A1 (en) | Dashboard for dynamic display of distributed transaction data | |
WO2015157137A1 (en) | Scenario modeling and visualization | |
CN111158831A (en) | Data processing method, device, equipment and medium based on instant messaging application | |
CN103714450A (en) | Natural language metric condition alerts generation | |
CN107229604B (en) | Transaction record information display method and device and computer readable storage medium | |
US20160042051A1 (en) | Decomposing events from managed infrastructures using graph entropy | |
CN112817817A (en) | Buried point information query method and device, computer equipment and storage medium | |
US20180276471A1 (en) | Information processing device calculating statistical information | |
US20180300572A1 (en) | Fraud detection based on user behavior biometrics | |
US20210201237A1 (en) | Enhanced user selection for communication workflows using machine-learning techniques | |
US11315010B2 (en) | Neural networks for detecting fraud based on user behavior biometrics | |
US20220318319A1 (en) | Focus Events | |
US20220365861A1 (en) | Automated actions based on ranked work events | |
CN112000863A (en) | User behavior data analysis method, device, equipment and medium | |
US20190087303A1 (en) | System, method, and apparatus for gathering information | |
CN112182295A (en) | Business processing method and device based on behavior prediction and electronic equipment | |
EP3385845A1 (en) | Tags for automatic cloud resource provisioning | |
US20230394030A1 (en) | Generating event logs from video streams |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: CONDUENT BUSINESS SERVICES, LLC, TEXAS
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KULKARNI, RAKESH S.;WU, WENCHENG;PRABHAKARA, JAGADEESH CHANDRA BOSE RANTHAM;AND OTHERS;SIGNING DATES FROM 20170320 TO 20170331;REEL/FRAME:041861/0744
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
AS | Assignment |
Owner name: JPMORGAN CHASE BANK, N.A., NEW YORK
Free format text: SECURITY AGREEMENT;ASSIGNOR:CONDUENT BUSINESS SERVICES, LLC;REEL/FRAME:050326/0511
Effective date: 20190423
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |
AS | Assignment |
Owner names (each with Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:057969/0180; Effective date: 20211015):
CONDUENT HEALTH ASSESSMENTS, LLC, NEW JERSEY
CONDUENT CASUALTY CLAIMS SOLUTIONS, LLC, NEW JERSEY
CONDUENT BUSINESS SOLUTIONS, LLC, NEW JERSEY
CONDUENT COMMERCIAL SOLUTIONS, LLC, NEW JERSEY
ADVECTIS, INC., GEORGIA
CONDUENT TRANSPORT SOLUTIONS, INC., NEW JERSEY
CONDUENT STATE & LOCAL SOLUTIONS, INC., NEW JERSEY
CONDUENT BUSINESS SERVICES, LLC, NEW JERSEY
AS | Assignment |
Owner name: BANK OF AMERICA, N.A., NORTH CAROLINA
Free format text: SECURITY INTEREST;ASSIGNOR:CONDUENT BUSINESS SERVICES, LLC;REEL/FRAME:057970/0001
Effective date: 20211015
Owner name: U.S. BANK, NATIONAL ASSOCIATION, CONNECTICUT
Free format text: SECURITY INTEREST;ASSIGNOR:CONDUENT BUSINESS SERVICES, LLC;REEL/FRAME:057969/0445
Effective date: 20211015