
US20190012841A1 - Artificial intelligence enhanced system for adaptive control driven ar/vr visual aids - Google Patents


Info

Publication number
US20190012841A1
US20190012841A1
Authority
US
United States
Prior art keywords: data, training, user, fixation, distortion
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US16/030,788
Inventor
Brian Kim
David A. Watola
Jay E. Cormier
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Eyedaptic Inc
Original Assignee
Eyedaptic, Inc.
Application filed by Eyedaptic, Inc. filed Critical Eyedaptic, Inc.
Priority to US16/030,788 (published as US20190012841A1)
Publication of US20190012841A1
Priority to US16/727,564 (US11043036B2)
Priority to US17/354,830 (US11521360B2)
Priority to US18/052,313 (US11935204B2)
Current legal status: Abandoned

Classifications

    • G06F3/012 Head tracking input arrangements
    • G06F3/013 Eye tracking input arrangements
    • G06F3/167 Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G06N5/04 Inference or reasoning models
    • G06T19/006 Mixed reality
    • G06F9/453 Help systems
    • G06N20/00 Machine learning
    • G06N3/006 Artificial life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]

Definitions

  • the above-discussed embodiments of the invention may be implemented using computer programming or engineering techniques including computer software, firmware, hardware or any combination or subset thereof. Any such resulting program, having computer-readable and/or computer-executable instructions, may be embodied or provided within one or more computer-readable media, thereby making a computer program product, i.e., an article of manufacture, according to the discussed embodiments of the invention.
  • the computer readable media may be, for instance, a fixed (hard) drive, diskette, optical disk, magnetic tape, semiconductor memory such as read-only memory (ROM) or flash memory, etc., or any transmitting/receiving medium such as the Internet or other communication network or link.
  • the article of manufacture containing the computer code may be made and/or used by executing the instructions directly from one medium, by copying the code from one medium to another medium, or by transmitting the code over a network.
  • AI definitions should be understood to control over dictionary definitions, definitions in documents incorporated by reference, and/or ordinary meanings of the defined terms.
  • an arrow may indicate a waiting or monitoring period of unspecified duration between enumerated steps of the depicted method.
  • the order in which a particular method occurs may or may not strictly adhere to the order of the corresponding steps shown.
  • each numerical parameter should at least be construed in light of the number of reported significant digits and by applying ordinary rounding techniques. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of the invention are approximations, the numerical values set forth in the specific examples are reported as precisely as possible. Any numerical value, however, inherently contains certain errors necessarily resulting from the standard deviation found in their respective testing measurements.
  • a computer system or machines of the invention include one or more processors (e.g., a central processing unit (CPU) a graphics processing unit (GPU) or both), a main memory and a static memory, which communicate with each other via a bus.
  • a processor may be provided by one or more processors including, for example, one or more of a single core or multi-core processor (e.g., AMD Phenom II X2, Intel Core Duo, AMD Phenom II X4, Intel Core i5, Intel Core i7 Extreme Edition 980X, or Intel Xeon E7-2820).
  • An I/O mechanism may include a video display unit (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alphanumeric input device (e.g., a keyboard), a cursor control device (e.g., a mouse), a disk drive unit, a signal generation device (e.g., a speaker), an accelerometer, a microphone, a cellular radio frequency antenna, a network interface device (e.g., a network interface card (NIC), Wi-Fi card, cellular modem, data jack, Ethernet port, modem jack, HDMI port, mini-HDMI port, USB port), touchscreen (e.g., CRT, LCD, LED, AMOLED, Super AMOLED), pointing device, trackpad, light (e.g., LED), light/image projection device, or a combination thereof.
  • Memory refers to a non-transitory memory which is provided by one or more tangible devices which preferably include one or more machine-readable medium on which is stored one or more sets of instructions (e.g., software) embodying any one or more of the methodologies or functions described herein.
  • the software may also reside, completely or at least partially, within the main memory, processor, or both during execution thereof by a computer within system, the main memory and the processor also constituting machine-readable media.
  • the software may further be transmitted or received over a network via the network interface device.
  • the term “machine-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions.
  • the term “machine-readable medium” shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present invention.
  • Memory may be, for example, one or more of a hard disk drive, solid state drive (SSD), an optical disc, flash memory, zip disk, tape drive, “cloud” storage location, or a combination thereof.
  • a device of the invention includes a tangible, non-transitory computer readable medium for memory.
  • Exemplary devices for use as memory include semiconductor memory devices, (e.g., EPROM, EEPROM, solid state drive (SSD), and flash memory devices e.g., SD, micro SD, SDXC, SDIO, SDHC cards); magnetic disks, (e.g., internal hard disks or removable disks); and optical disks (e.g., CD and DVD disks).


Abstract

Interactive systems using adaptive control software and hardware, spanning known and later-developed eyepieces, head-wear, and lenses, including implantable, temporarily insertable, contact, and related film-based lenses with thin-film transparent elements housing cameras, lenses, projectors, and functionally equivalent processing tools. Simple controls, real-time updates, and instant feedback allow implicit optimization of a universal model while managing complexity.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • This application claims the benefit of and priority to U.S. Provisional Patent Application Ser. Nos. 62/530,286 and 62/530,792, filed July 2017, the content of each of which is incorporated herein by reference in its entirety, along with full reservation of all Paris Convention rights.
  • BACKGROUND OF THE DISCLOSURES
  • The Interactive Augmented Reality (AR) Visual Aid invention described below is intended for users with visual impairments that impact field of vision (FOV). These may take the form of age-related macular degeneration, retinitis pigmentosa, diabetic retinopathy, Stargardt's disease, and other diseases where damage to part of the retina impairs vision. The invention described is novel because it not only supplies algorithms to enhance vision, but also provides simple but powerful controls and a structured process that allows the user to adjust those algorithms.
  • The basic hardware is constructed from a non-invasive, wearable electronics-based AR eyeglass system (see FIGURE) employing any of a variety of integrated display technologies, including LCD, OLED, or direct retinal projection. One or more cameras, mounted on the glasses, continuously monitor the view where the glasses are pointing. The AR system also contains an integrated processor and memory storage (either embedded in the glasses, or tethered by a cable) with embedded software implementing real-time algorithms that modify the images as they are captured by the camera(s). These modified, or corrected, images are then continuously presented to the eyes of the user via the integrated displays.
  • The basic image modification algorithms come in multiple forms as described later. In conjunction with the AR hardware glasses, they enable users to enhance vision in ways extending far beyond simple image changes such as magnification or contrast enhancement. The fundamental invention is a series of adjustments that are applied to move, modify, or reshape the image in order to reconstruct it to suit each specific user's FOV and take full advantage of the remaining useful retinal area. The following disclosure describes a variety of mapping, warping, distorting and scaling functions used to correct the image for the end user.
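To make the warping idea concrete, here is a minimal sketch of one remapping of the kind described above. It is not the patent's implementation: the exponential radial falloff, the `strength` parameter, and the nearest-neighbour sampling are invented here purely to illustrate how content hidden by a central scotoma can be displaced outward onto intact peripheral retina.

```python
# Illustrative radial remap: output pixels sample from points pulled
# toward the scotoma center, so central content reappears in a ring
# around the damaged region. Parameters are assumptions, not the
# disclosed algorithm.
import math

def radial_remap(image, cx, cy, scotoma_r, strength=0.9):
    """Warp `image` (list of rows of pixel values) around a scotoma
    centered at (cx, cy); far from the center the warp fades to identity."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dx, dy = x - cx, y - cy
            r = math.hypot(dx, dy)
            if r == 0:
                sx, sy = cx, cy
            else:
                # Sample from a point closer to the center than (x, y):
                # content originally under the scotoma lands at larger radii.
                shrink = max(0.0, 1.0 - strength * math.exp(-r / scotoma_r))
                sx, sy = cx + dx * shrink, cy + dy * shrink
            # Nearest-neighbour sampling keeps the sketch short.
            out[y][x] = image[min(h - 1, max(0, round(sy)))][min(w - 1, max(0, round(sx)))]
    return out
```

In a real system this per-pixel loop would be replaced by a precomputed lookup table or GPU shader so the remap runs at display frame rate.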
  • The invention places these fundamental algorithms under human control, allowing the user to interact directly with the corrected image and tailor its appearance for their particular condition or specific use case (see flowchart below). In prior art, an accurate map of the usable user FOV is a required starting point that must be known in order to provide a template for modifying the visible image. With this disclosure, such a detailed starting point derived from FOV measurements does not have to be supplied. Instead, an internal model of the FOV is developed, beginning with the display of a generic template or a shape that is believed to roughly match the type of visual impairment of the user. From this simple starting point the user adjusts the shape and size of the displayed visual abnormality, using the simple control interface to add detail progressively, until the user can visually confirm that the displayed model captures the nuances of his or her personal visual field. Using this unique method, accurate FOV tests and initial templates are not required. Furthermore, the structured process, which incrementally increases model detail, makes the choice of initial model non-critical.
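The progressive model-building process above can be sketched as follows. The circular starting template, the drag operation, and midpoint subdivision are illustrative assumptions standing in for the generic template, user adjustment, and incremental detail increase the disclosure describes:

```python
# Sketch of progressive FOV-model refinement: a generic template is
# displayed, the user drags boundary control points, and detail is
# added incrementally by subdividing the boundary.
import math

def circle_template(cx, cy, r, n=8):
    """Generic starting shape: n control points on a circle."""
    return [(cx + r * math.cos(2 * math.pi * k / n),
             cy + r * math.sin(2 * math.pi * k / n)) for k in range(n)]

def move_point(points, i, dx, dy):
    """User drags control point i; the model updates immediately."""
    pts = list(points)
    x, y = pts[i]
    pts[i] = (x + dx, y + dy)
    return pts

def refine(points):
    """Increase model detail by inserting a midpoint between each pair
    of neighbouring control points."""
    out = []
    for k, (x, y) in enumerate(points):
        nx, ny = points[(k + 1) % len(points)]
        out.append((x, y))
        out.append(((x + nx) / 2, (y + ny) / 2))
    return out
```

Because detail is added only where and when the user asks for it, the choice of initial template stays non-critical, matching the structured process described above.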
  • OBJECTS AND SUMMARY OF THE INVENTION
  • For people with retinal diseases, adapting to loss of vision becomes a way of life. This can affect their lives in many ways, including loss of the ability to read, loss of income, loss of mobility, and an overall degraded quality of life. However, with prevalent retinal diseases such as AMD (Age-related Macular Degeneration), not all of the vision is lost; the peripheral vision remains intact, as only the central vision is impacted by the degradation of the macula. Given that the peripheral vision remains intact, it is possible to take advantage of eccentric viewing and, through patient adaptation, to increase functionality such as reading. Research has shown that training of eccentric viewing increases reading ability (both accuracy and speed); eye movement control training and PRL (Preferred Retinal Locus) training were important to achieving these results [1]. Another factor in increasing reading ability for those with reduced vision is the ability to view words in context as opposed to in isolation. Magnification is often used as a simple visual aid, with some success. However, with increased magnification comes decreased FOV (Field of View), and therefore a reduced ability to see other words or objects around the word or object of interest. Although it has been shown that with extensive training isolated word reading can improve, eye control was important to this as well [2]. The capability to guide the training for eccentric viewing, eye movement, and fixation is important to achieving improvements in functionality such as reading. The approaches outlined below describe novel ways to use augmented reality techniques to both automate and improve this training.
  • In order to help users with retinal diseases, especially users with central vision deficiencies, it is first important to train and support their ability to fixate on a target. Since central vision is normally used for this, this is an important step in helping users control their ability to focus on a target, thereby laying the groundwork for further training and adaptation functionality. This fixation training can be accomplished through gamification built into the software algorithms, and can be utilized periodically for increased fixation training and improved adaptation. The gamification can be accomplished by following fixation targets around the display screen; in conjunction with a hand-held pointer, the user can select or click on the target during timed or untimed exercises. Furthermore, this can be accomplished through voice-activated controls as a substitute or adjunct to a hand-held pointer.
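The scoring side of such a gamified fixation exercise might look like the sketch below. The tolerance radius and the hit/miss metric are invented for illustration; the patent does not specify them.

```python
# Hypothetical scoring for a fixation-training round: a target visits a
# sequence of screen positions and each pointer "click" is compared
# against the target position at that moment.
import math

def score_fixation_round(targets, clicks, tolerance=20.0):
    """targets and clicks are parallel lists of (x, y) screen positions.
    Returns (number of hits within tolerance, mean miss distance)."""
    hits, dists = 0, []
    for (tx, ty), (cx, cy) in zip(targets, clicks):
        d = math.hypot(tx - cx, ty - cy)
        dists.append(d)
        if d <= tolerance:
            hits += 1
    return hits, sum(dists) / len(dists)
```

Tracking the mean miss distance over sessions gives a simple progress measure for the periodic training the text describes.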
  • To aid the user in targeting and fixation, certain guide lines can be overlaid on reality or on the incoming image to help guide the user's eye movements along the optimal path. These guidelines can be a plurality of constructs such as, but not limited to, cross-hair targets, bullseye targets, or linear guidelines such as singular or parallel dotted lines a fixed or variable distance apart, or a dotted line or solid box of varying colors. This will enable the user to increase their training and adaptation for eye movement control by following the tracking lines or targets as their eyes move across a scene, in the case of a landscape, picture, or video monitor, or across a page, in the case of reading text.
  • This approach can be further modified and improved with other interactive methods beyond simple eye movement. Targeting approaches as described above can also be tied to head movement based on inertial sensor inputs or simply following along as the head moves. Furthermore, these guided fixation targets, or lines, can move across the screen at a predetermined fixed rate to encourage the user to follow along and keep pace. These same targets can also be scrolled across the screen at variable rates as determined or triggered by the user for customization to the situation or scene or text of interest.
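The fixed-rate scrolling of a guide target can be reduced to a small position function, sketched below under invented units (pixels and pixels per second); line wrapping stands in for moving to the next line of text.

```python
# Illustrative position of a guide target scrolling left-to-right at a
# predetermined rate, wrapping to the next text line at the margin.
def guide_position(t, rate, line_length, line_spacing):
    """Return (x, y) of the guide target at time t seconds, scrolling
    at `rate` px/s along lines of `line_length` px."""
    total = t * rate
    line = int(total // line_length)
    x = total % line_length
    return x, line * line_spacing
```

A user-triggered variable rate, as described above, would simply make `rate` a function of accumulated user input rather than a constant.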
  • To make the most of a user's remaining useful vision, methods for adaptive peripheral vision training can be employed. Training and encouraging the user to make the most of their eccentric viewing capabilities is important. As described, the user may naturally gravitate to their PRL (preferred retinal locus) to help optimize their eccentric viewing. However, this may not be the optimal location to maximize their ability to view images or text with their peripheral vision. Through skewing and warping of the images presented to the user, along with the targeting guidelines, the optimal place for the user to target their eccentric vision can be determined.
  • Eccentric viewing training through reinforced learning can be encouraged by a series of exercises. The targeting described in fixation training can also be used here. With fixation targets on, the object, area, or word of interest can be incrementally tested by shifting locations to determine the best PRL for eccentric viewing.
  • Also, pupil tracking algorithms can be employed that not only have eye tracking capability but can also utilize a user-customized offset for improved eccentric viewing, whereby the eccentric viewing targets are offset to guide the user to focus on their optimal area for eccentric viewing.
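The user-customized offset might be calibrated and applied as in the sketch below. The mean-displacement estimator and function names are assumptions for illustration; any pupil tracker supplying gaze points would work.

```python
# Hypothetical PRL-offset handling: estimate the offset between tracked
# gaze and eccentric-viewing targets, then shift subsequent gaze points
# so overlays land on the user's preferred retinal locus.
def calibrate_prl_offset(gaze_samples, target_samples):
    """Estimate the offset as the mean displacement between tracked
    gaze points and the targets the user viewed eccentrically."""
    n = len(gaze_samples)
    ox = sum(t[0] - g[0] for g, t in zip(gaze_samples, target_samples)) / n
    oy = sum(t[1] - g[1] for g, t in zip(gaze_samples, target_samples)) / n
    return ox, oy

def apply_prl_offset(gaze_x, gaze_y, offset_x, offset_y):
    """Shift a raw gaze estimate by the calibrated PRL offset."""
    return gaze_x + offset_x, gaze_y + offset_y
```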
  • Further improvements in visual adaptation can be achieved through use of the hybrid distortion algorithms. With the layered distortion approach, objects or words on the outskirts of the image can receive a different distortion and provide a look-ahead preview, piecing together words for increased reading speed. While the user is focused on the area of interest being manipulated, the words moving into the focus area help provide context, in order to interpolate and better understand what is coming, for faster comprehension and contextual understanding.
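One way the layered approach could apply to a line of text is sketched below: the word under fixation gets a strong magnification while neighbouring words keep a milder one, preserving the look-ahead preview. The magnification factors and character-width model are invented for illustration.

```python
# Illustrative hybrid distortion for reading: the fixated word is
# enlarged, outskirt words get a milder preview magnification so
# upcoming text stays visible for context.
def layout_words(words, focus_index, focus_mag=2.0, preview_mag=1.2, char_w=8):
    """Return a display width (px) for each word in the line."""
    widths = []
    for i, w in enumerate(words):
        mag = focus_mag if i == focus_index else preview_mag
        widths.append(len(w) * char_w * mag)
    return widths
```

As fixation advances along the line, `focus_index` moves with it, so each word is previewed at low magnification before entering the focus zone.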
  • Furthermore, the user can be run through a series of practice modules whereby different distortion levels and methods are employed. With these different methods hybrid distortion training can be used to switch between areas of interest to improve fixation.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Various preferred embodiments are described herein with references to the drawings in which merely illustrative views are offered for consideration, whereby:
  • FIG. 1 is a grid manipulation flowchart, including hierarchical model construction.
  • Corresponding reference characters, which indicate corresponding components, are not needed throughout the single view of the drawing. Skilled artisans will appreciate that elements in the figure are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figure may be exaggerated relative to other elements to help improve understanding of various embodiments of the present invention. Also, common but well-understood elements that are useful or necessary in a commercially feasible embodiment are often not depicted in order to facilitate a less obstructed view of these various embodiments.
  • DETAILED DESCRIPTIONS
  • The present inventors have discovered that low-vision users can conform a user-tuned software set and improve needed aspects of vision to enable functional vision to be restored.
  • Expressly incorporated by reference as if fully set forth herein are the following: U.S. Provisional Patent Application No. 62/530,286 filed Jul. 9, 2017, U.S. Provisional Patent Application No. 62/530,792 filed Jul. 9, 2017, U.S. Provisional Patent Application No. 62/579,657, filed Oct. 13, 2017, U.S. Provisional Patent Application No. 62/579,798, filed Oct. 13, 2017, Patent Cooperation Treaty Patent Application No. PCT/US17/62421, filed Nov. 17, 2017, U.S. NonProvisional patent application Ser. No. 15/817,117, filed Nov. 17, 2017, U.S. Provisional Patent Application No. 62/639,347, filed Mar. 6, 2018, U.S. NonProvisional patent application Ser. No. 15/918,884, filed Mar. 12, 2018, and U.S. Provisional Patent Application No. 62/677,463, filed May 29, 2018.
  • It is contemplated that the processes described above are implemented in a system configured to present an image to the user. The processes may be implemented in software, such as machine readable code or machine executable code that is stored on a memory and executed by a processor. Input signals or data are received by the unit from a user, cameras, detectors, or any other device. Output is presented to the user in any manner, including a screen display or headset display.
  • In addition, the logic flows depicted in the figures do not require the particular order shown, or sequential order, to achieve desirable results. Furthermore, other steps may be provided or steps may be eliminated from the described flows, and other components may be added to, or removed from, the described systems. Accordingly, other embodiments are within the scope of the following claims.
  • Referring now to FIG. 1, systems of the present invention are shown schematically. Steps A-I are enhanced by the various interfaces and loops connecting AI interfaces with the instant system, as is known to those of skill in the art.
  • AI Data 101 resides both in its own database and in an AI cloud 109, along with AI Compiler 111 and AI filter 107, along with any other required AI architecture 103 and AI Intervenor 105. Step A involves identifying region(s) to remap from within the source FOV; Step B initializes the same to achieve Step C, wherein the model created is ratified.
  • AI Architecture 103 provides both resident and transient data sets to address the issue(s) being ameliorated in the user's vision. Said data sets reside in at least one of the sub-elements of the AI architecture, namely AI cloud 109, AI compiler 111, AI filter 107, and AI intervenor 105, as known to those skilled in the art. Likewise, in Step D the user selects point outputs; in Step E the user moves the selected point(s), updating models in real time; and in Step F the user releases the selected point(s). In Step G the interlocutory model is deemed complete, in Step H it is found to need updates, or in Step I it is complete. Those skilled in the art understand the multi-path approach and orientation to use AI elements to create functional and important models using said data, inter alia.
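  • The Step A-I loop of FIG. 1 can be illustrated with a minimal sketch. All class and method names here are hypothetical, and the warp recomputation itself is left as a placeholder; the sketch shows only the select/move/release/review cycle the steps describe.

```python
class RemapModel:
    """A warp model defined by movable control points over the source FOV."""

    def __init__(self, region):
        self.region = region       # Step A: region(s) to remap within the source FOV
        self.points = {}           # control-point id -> (x, y)
        self.complete = False

    def initialize(self, points):  # Step B: initialize the model (Step C: ratify)
        self.points = dict(points)

    def select(self, point_id):    # Step D: user selects a point
        return self.points[point_id]

    def move(self, point_id, xy):  # Step E: model updates in real time as points move
        self.points[point_id] = xy
        self._rebuild()

    def release(self, point_id):   # Step F: user releases the selected point
        self._rebuild()

    def _rebuild(self):
        # Recompute the warp from the current control points (placeholder).
        pass

    def review(self, accepted):    # Steps G-I: deem complete, or loop back for updates
        self.complete = accepted
        return self.complete
```

For example, initializing with one control point, dragging it, and then accepting the model exercises Steps B, D-F, and G-I in order.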
  • It will be appreciated that the above embodiments that have been described in particular detail are merely example or possible embodiments, and that there are many other combinations, additions, or alternatives that may be included. For example, while online gaming has been referred to throughout, other applications of the above embodiments include online or web-based applications or other cloud services.
  • Also, the particular naming of the components, capitalization of terms, the attributes, data structures, or any other programming or structural aspect is not mandatory or significant, and the mechanisms that implement the invention or its features may have different names, formats, or protocols. Further, the system may be implemented via a combination of hardware and software, as described, or entirely in hardware elements. Also, the particular division of functionality between the various system components described herein is merely exemplary, and not mandatory; functions performed by a single system component may instead be performed by multiple components, and functions performed by multiple components may instead be performed by a single component.
  • Some portions of the above description present features in terms of algorithms and symbolic representations of operations on information. These algorithmic descriptions and representations may be used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. These operations, while described functionally or logically, are understood to be implemented by computer programs. Furthermore, it has also proven convenient at times, to refer to these arrangements of operations as modules or by functional names, without loss of generality.
  • Unless specifically stated otherwise as apparent from the above discussion, it is appreciated that throughout the description, discussions utilizing terms such as “processing” or “computing” or “calculating” or “determining” or “identifying” or “displaying” or “providing” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system memories or registers or other such information storage, transmission or display devices.
  • Based on the foregoing specification, the above-discussed embodiments of the invention may be implemented using computer programming or engineering techniques including computer software, firmware, hardware or any combination or subset thereof. Any such resulting program, having computer-readable and/or computer-executable instructions, may be embodied or provided within one or more computer-readable media, thereby making a computer program product, i.e., an article of manufacture, according to the discussed embodiments of the invention. The computer readable media may be, for instance, a fixed (hard) drive, diskette, optical disk, magnetic tape, semiconductor memory such as read-only memory (ROM) or flash memory, etc., or any transmitting/receiving medium such as the Internet or other communication network or link. The article of manufacture containing the computer code may be made and/or used by executing the instructions directly from one medium, by copying the code from one medium to another medium, or by transmitting the code over a network.
  • While the disclosure has been described in terms of various specific embodiments, it will be recognized that the disclosure can be practiced with modification within the spirit and scope of the claims.
  • While several embodiments of the present disclosure have been described and illustrated herein, those of ordinary skill in the art will readily envision a variety of other means and/or structures for performing the functions and/or obtaining the results and/or one or more of the advantages described herein, and each of such variations and/or modifications is deemed to be within the scope of the present disclosure. More generally, those skilled in the art will readily appreciate that all parameters, dimensions, materials, and configurations described herein are meant to be exemplary and that the actual parameters, dimensions, materials, and/or configurations will depend upon the specific application or applications for which the teachings of the present disclosure is/are used.
  • Those skilled in the art will recognize, or be able to ascertain using no more than routine experimentation, many equivalents to the specific embodiments of the disclosure described herein. It is, therefore, to be understood that the foregoing embodiments are presented by way of example only and that, within the scope of the appended claims and equivalents thereto, the disclosure may be practiced otherwise than as specifically described and claimed. The present disclosure is directed to each individual feature, system, article, material, kit, and/or method described herein. In addition, any combination of two or more such features, systems, articles, materials, kits, and/or methods, if such features, systems, articles, materials, kits, and/or methods are not mutually inconsistent, is included within the scope of the present disclosure.
  • AI definitions, as defined and used herein, should be understood to control over dictionary definitions, definitions in documents incorporated by reference, and/or ordinary meanings of the defined terms.
  • The indefinite articles “a” and “an,” as used herein in the specification and in the claims, unless clearly indicated to the contrary, should be understood to mean “at least one.” The phrase “and/or,” as used herein in the specification and in the claims, should be understood to mean “either or both” of the elements so conjoined, i.e., elements that are conjunctively present in some cases and disjunctively present in other cases. Other elements may optionally be present other than the elements specifically identified by the “and/or” clause, whether related or unrelated to those elements specifically identified, unless clearly indicated to the contrary.
  • Reference throughout this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
  • The terms and expressions which have been employed herein are used as terms of description and not of limitation, and there is no intention, in the use of such terms and expressions, of excluding any equivalents of the features shown and described (or portions thereof), and it is recognized that various modifications are possible within the scope of the claims. Accordingly, the claims are intended to cover all such equivalents.
  • Reference throughout this specification to “one embodiment,” “an embodiment,” or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, appearances of the phrases “in one embodiment,” “in an embodiment,” and similar throughout this specification may, but do not necessarily, all refer to the same embodiment.
  • Furthermore, the described features, structures, or characteristics of the invention may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention may be practiced without one or more of the specific details, or with other methods, components, materials, and so forth. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of the invention.
  • The schematic flow chart diagrams included herein are generally set forth as logical flow chart diagrams. As such, the depicted order and labeled steps are indicative of one embodiment of the presented method. Other steps and methods may be conceived that are equivalent in function, logic, or effect to one or more steps, or portions thereof, of the illustrated method. Additionally, the format and symbols employed are provided to explain the logical steps of the method and are understood not to limit the scope of the method. Although various arrow types and line types may be employed in the flow chart diagrams, they are understood not to limit the scope of the corresponding method. Indeed, some arrows or other connectors may be used to indicate only the logical flow of the method. For instance, an arrow may indicate a waiting or monitoring period of unspecified duration between enumerated steps of the depicted method. Additionally, the order in which a particular method occurs may or may not strictly adhere to the order of the corresponding steps shown.
  • Unless otherwise indicated, all numbers expressing quantities of ingredients, properties such as molecular weight, reaction conditions, and so forth used in the specification and claims are to be understood as being modified in all instances by the term “about.” Accordingly, unless indicated to the contrary, the numerical parameters set forth in the specification and attached claims are approximations that may vary depending upon the desired properties sought to be obtained by the present invention. At the very least, and not as an attempt to limit the application of the doctrine of equivalents to the scope of the claims, each numerical parameter should at least be construed in light of the number of reported significant digits and by applying ordinary rounding techniques.
Notwithstanding that the numerical ranges and parameters setting forth the broad scope of the invention are approximations, the numerical values set forth in the specific examples are reported as precisely as possible. Any numerical value, however, inherently contains certain errors necessarily resulting from the standard deviation found in their respective testing measurements.
  • The terms “a,” “an,” “the” and similar referents used in the context of describing the invention (especially in the context of the following claims) are to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. Recitation of ranges of values herein is merely intended to serve as a shorthand method of referring individually to each separate value falling within the range. Unless otherwise indicated herein, each individual value is incorporated into the specification as if it were individually recited herein. All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and all examples, or exemplary language (e.g., “such as”) provided herein is intended merely to better illuminate the invention and does not pose a limitation on the scope of the invention otherwise claimed. No language in the specification should be construed as indicating any non-claimed element essential to the practice of the invention.
  • Groupings of alternative elements or embodiments of the invention disclosed herein are not to be construed as limitations. Each group member may be referred to and claimed individually or in any combination with other members of the group or other elements found herein. It is anticipated that one or more members of a group may be included in, or deleted from, a group for reasons of convenience and/or patentability. When any such inclusion or deletion occurs, the specification is deemed to contain the group as modified thus fulfilling the written description of all Markush groups used in the appended claims.
  • Certain embodiments of this invention are described herein, including the best mode known to the inventors for carrying out the invention. Of course, variations on these described embodiments will become apparent to those of ordinary skill in the art upon reading the foregoing description. The inventor expects skilled artisans to employ such variations as appropriate, and the inventors intend for the invention to be practiced otherwise than specifically described herein. Accordingly, this invention includes all modifications and equivalents of the subject matter recited in the claims appended hereto as permitted by applicable law. Moreover, any combination of the above-described elements in all possible variations thereof is encompassed by the invention unless otherwise indicated herein or otherwise clearly contradicted by context.
  • Specific embodiments disclosed herein may be further limited in the claims using consisting of or consisting essentially of language. When used in the claims, whether as filed or added per amendment, the transition term “consisting of” excludes any element, step, or ingredient not specified in the claims. The transition term “consisting essentially of” limits the scope of a claim to the specified materials or steps and those that do not materially affect the basic and novel characteristic(s). Embodiments of the invention so claimed are inherently or expressly described and enabled herein.
  • As one skilled in the art would recognize as necessary or best-suited for performance of the methods of the invention, a computer system or machines of the invention include one or more processors (e.g., a central processing unit (CPU), a graphics processing unit (GPU), or both), a main memory, and a static memory, which communicate with each other via a bus.
  • A processor may be provided by one or more processors including, for example, one or more of a single core or multi-core processor (e.g., AMD Phenom II X2, Intel Core Duo, AMD Phenom II X4, Intel Core i5, Intel Core i7 Extreme Edition 980X, or Intel Xeon E7-2820).
  • An I/O mechanism may include a video display unit (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alphanumeric input device (e.g., a keyboard), a cursor control device (e.g., a mouse), a disk drive unit, a signal generation device (e.g., a speaker), an accelerometer, a microphone, a cellular radio frequency antenna, and a network interface device (e.g., a network interface card (NIC), Wi-Fi card, cellular modem, data jack, Ethernet port, modem jack, HDMI port, mini-HDMI port, USB port), touchscreen (e.g., CRT, LCD, LED, AMOLED, Super AMOLED), pointing device, trackpad, light (e.g., LED), light/image projection device, or a combination thereof.
  • Memory according to the invention refers to a non-transitory memory which is provided by one or more tangible devices which preferably include one or more machine-readable media on which is stored one or more sets of instructions (e.g., software) embodying any one or more of the methodologies or functions described herein. The software may also reside, completely or at least partially, within the main memory, the processor, or both during execution thereof by a computer within the system, the main memory and the processor also constituting machine-readable media. The software may further be transmitted or received over a network via the network interface device.
  • While the machine-readable medium can in an exemplary embodiment be a single medium, the term “machine-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “machine-readable medium” shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present invention. Memory may be, for example, one or more of a hard disk drive, solid state drive (SSD), an optical disc, flash memory, zip disk, tape drive, “cloud” storage location, or a combination thereof. In certain embodiments, a device of the invention includes a tangible, non-transitory computer readable medium for memory. Exemplary devices for use as memory include semiconductor memory devices (e.g., EPROM, EEPROM, solid state drives (SSD), and flash memory devices such as SD, micro SD, SDXC, SDIO, and SDHC cards); magnetic disks (e.g., internal hard disks or removable disks); and optical disks (e.g., CD and DVD disks).
  • Furthermore, numerous references have been made to patents and printed publications throughout this specification. Each of the above-cited references and printed publications is individually incorporated herein by reference in its entirety.
  • In closing, it is to be understood that the embodiments of the invention disclosed herein are illustrative of the principles of the present invention. Other modifications that may be employed are within the scope of the invention. Thus, by way of example, but not of limitation, alternative configurations of the present invention may be utilized in accordance with the teachings herein. Accordingly, the present invention is not limited to that precisely as shown and described.

Claims (18)

What is claimed is:
1. An adaptive control driven system for visual enhancement and correction useful for addressing ocular disease states, which further comprises, in combination:
Adaptive peripheral vision training;
Eccentric viewing training;
Pupil tracking with customizable offset for eccentric viewing;
Gamification—follow fixation targets around screen for training;
Targeting lines overlaid on reality for fixation;
Guided fixation across page or landscape w/head tracking;
Guided fixation with words moving across screen at fixed rates;
Guided fixation with words moving at variable rates triggered by user;
Guided Training & controlling eye movements with tracking lines;
Look ahead preview to piece together words for increased reading speed;
Distortion training to improve fixation.
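As one illustration of the claimed "pupil tracking with customizable offset for eccentric viewing," a fixed per-user offset can shift the fixation target from the tracked gaze point toward a healthier eccentric retinal locus. This is a hypothetical sketch only; the function name, screen-coordinate convention, and offset values are illustrative and do not come from the disclosure.

```python
def fixation_target(pupil_xy, offset_xy):
    """Place the fixation target at the tracked gaze point plus a
    per-user eccentric-viewing offset (screen pixel coordinates)."""
    px, py = pupil_xy
    ox, oy = offset_xy
    return (px + ox, py + oy)

# A user trained to fixate slightly left of their damaged central field:
target = fixation_target(pupil_xy=(640, 360), offset_xy=(-80, 0))
# target == (560, 360)
```

The offset would be calibrated once per user and then applied to every tracked gaze sample, so training targets consistently land on the preferred eccentric locus.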
2. In a system for using AR/VR to address visual issues in users, improvements which comprise the following improvements in HYBRID WARPING & LAYERING:
Layered processing—
All processing Implemented as a serial pipeline of independent processing stages, directly connected only by one-way data flow;
General model both for transforming data and adding to it (e.g. rendering independent information into the image);
Internally, a pipeline stage can be arbitrarily complex (e.g. effectively combine multiple stages into one monolithic stage for performance reasons);
Can reconfigure pipeline connections on-the-fly to enable or disable features;
Tiered Radial Warp—
Divide output display into central, inner, transition, and outer regions with different radial mapping characteristics in each region;
Only three primary parameters adjust;
Easy to understand and manipulate;
Capable of accommodating varied situations/tasks with different FOV requirements;
Two auxiliary parameters customized per user but infrequently changed;
Simple and efficient processing;
Reference lines: highly visible, guide to optimal location, show distortion contours even as warping changes, speed up scanning to start of lines in spite of distortion;
Guide the eye to optimal location for reading or high-acuity task;
Show distortion contours as they bend away into peripheral vision;
Speed up scanning to start of next line in spite of distortion;
Dual reference lines—bracket text to show further distortion details;
Fiducial
Change traditional crosshairs to oriented-T (typically inverted-T) to provide intersection point for fixation without drawing over reading/high-acuity areas;
Different color from reference line to avoid confusion and distraction;
Control
Numerous interfaces provided for parameter adjustment;
Freely change any parameter at any time;
Save or restore entire “hotkey” configurations;
Automatic mode to adjust warp parameters and/or select configuration based on camera view (not described in detail here); and
Smooth and continuous transitions.
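The tiered radial warp recited above can be sketched as a piecewise radial mapping with the three primary region radii as adjustable parameters. This is a hypothetical, continuous piecewise-linear illustration; the region radii and gains are illustrative values, not parameters taken from the disclosure.

```python
def tiered_radial_warp(r, r_c=0.2, r_i=0.5, r_o=0.9, g_c=1.5, g_i=1.2):
    """Map an output-display radius r in [0, 1] to a source radius.

    Central and inner tiers magnify (source advances slower than output),
    the transition tier blends back to identity, and the outer tier is
    untouched, so the full field of view survives at the display edge.
    """
    s_c = r_c / g_c                  # source radius at the central-region edge
    s_i = s_c + (r_i - r_c) / g_i    # source radius at the inner-region edge
    if r <= r_c:                     # central region: magnify by g_c
        return r / g_c
    if r <= r_i:                     # inner region: milder magnification g_i
        return s_c + (r - r_c) / g_i
    if r <= r_o:                     # transition region: blend to identity
        t = (r - r_i) / (r_o - r_i)
        return s_i + t * (r_o - s_i)
    return r                         # outer region: unwarped periphery
```

Because the three region radii (r_c, r_i, r_o) are the only primary knobs, the warp is easy to adjust per task, while the per-user gains change infrequently, matching the claim's description.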
3. An adaptive control driven system for visual enhancement and correction useful for addressing ocular disease states, which comprises, in combination:
Software using at least one feature programmed to simulate improved functional vision for a user from a matrix selected from the group consisting of
Hybrid magnification & warping;
FOV dependent on head tracking;
Word shifting with “target lines”;
Central radial warping;
Interactive on the fly FOV mapping;
Dynamic Zoom;
OCR & Font change adaptation;
Distortion Grid adjustment;
Scotoma interactive adjustment; and,
Adaptive peripheral vision training.
4. An adaptive control driven system for visual enhancement and correction useful for addressing ocular disease states, comprising hardware which further comprises, at least the following features and their functional equivalents:
At least a machine or manufacture of matter in the state of the art effective for managing;
One button wireless update;
Stabilization & targeting training;
Targeting lines & crosshairs for eye fixation & tracking;
Interactive voice recognition and control;
Reading & text recognition mode;
Voice memo; and
Mode shift transitions.
5. An adaptive control driven system for visual enhancement and correction useful for addressing ocular disease states, which comprises in combination:
At least a set of hardware capable of implementing user-driven adjustments, driven by any subject software described herein to effectively manage;
Hybrid magnification & warping;
FOV dependent on head tracking;
Word shifting with “target lines”;
Central radial warping;
Interactive on the fly FOV mapping;
Dynamic Zoom;
OCR & Font change adaptation;
Distortion Grid adjustment;
Scotoma interactive adjustment;
Adaptive peripheral vision training;
In combination in whole or in part with:
One button wireless update;
Stabilization & targeting training;
Training lines & crosshairs for eye fixation & tracking;
Interactive voice recognition mode;
Voice memo; and
Mode shift transitions.
6. The System of claim 1, further comprising data collected and arrayed by AI.
7. The System of claim 2, further comprising data collected and arrayed by AI.
8. The System of claim 3, further comprising data collected and arrayed by AI.
9. The System of claim 4, further comprising data collected and arrayed by AI.
10. The System of claim 5, further comprising data collected and arrayed by AI.
11. An AI enhanced system, further comprising at least an AI Data 101, residing both in its own database and in an AI cloud 109, along with AI Compiler 111 and AI filter 107, and with any other required AI architecture 103 and AI Intervenor 105.
12. The system of claim 11, wherein user data is supplemented by key medical information arrayed within an AI architecture, further comprising both resident and transient data sets operatively linked thereto.
13. The system of claim 12, wherein said resident and transient data sets are in communication with at least the AI cloud, and the AI Intervenor analyzes and makes available select data, through and in connection with AI filter(s) to provide models which better support user's needs.
14. The system of claim 13, whereby AI Data is linked to and operatively connected with AI Architecture and any required interfaces with Adaptive Control visual system(s).
15. The system of claim 14, whereby user data is protected and subject to AI Filter before becoming part of larger data super-sets.
16. The system of claim 15, wherein settings control which portions of user data are arrayed in specific cloud locations, the AI Compiler, and other aspects of said AI Architecture.
17. The system of claim 16, whereby key ophthalmic, optical and other medical data are sequestered.
18. The system of claim 17, whereby said sequestration is concomitantly managed with other hypothecation of select aspects of the AI Data.
US16/030,788 2017-07-09 2018-07-09 Artificial intelligence enhanced system for adaptive control driven ar/vr visual aids Abandoned US20190012841A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
US16/030,788 US20190012841A1 (en) 2017-07-09 2018-07-09 Artificial intelligence enhanced system for adaptive control driven ar/vr visual aids
US16/727,564 US11043036B2 (en) 2017-07-09 2019-12-26 Artificial intelligence enhanced system for adaptive control driven AR/VR visual aids
US17/354,830 US11521360B2 (en) 2017-07-09 2021-06-22 Artificial intelligence enhanced system for adaptive control driven AR/VR visual aids
US18/052,313 US11935204B2 (en) 2017-07-09 2022-11-03 Artificial intelligence enhanced system for adaptive control driven AR/VR visual aids

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201762530286P 2017-07-09 2017-07-09
US201762530792P 2017-07-10 2017-07-10
US16/030,788 US20190012841A1 (en) 2017-07-09 2018-07-09 Artificial intelligence enhanced system for adaptive control driven ar/vr visual aids

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/727,564 Continuation US11043036B2 (en) 2017-07-09 2019-12-26 Artificial intelligence enhanced system for adaptive control driven AR/VR visual aids

Publications (1)

Publication Number Publication Date
US20190012841A1 true US20190012841A1 (en) 2019-01-10

Family

ID=64902844

Family Applications (4)

Application Number Title Priority Date Filing Date
US16/030,788 Abandoned US20190012841A1 (en) 2017-07-09 2018-07-09 Artificial intelligence enhanced system for adaptive control driven ar/vr visual aids
US16/727,564 Active US11043036B2 (en) 2017-07-09 2019-12-26 Artificial intelligence enhanced system for adaptive control driven AR/VR visual aids
US17/354,830 Active US11521360B2 (en) 2017-07-09 2021-06-22 Artificial intelligence enhanced system for adaptive control driven AR/VR visual aids
US18/052,313 Active US11935204B2 (en) 2017-07-09 2022-11-03 Artificial intelligence enhanced system for adaptive control driven AR/VR visual aids

Family Applications After (3)

Application Number Title Priority Date Filing Date
US16/727,564 Active US11043036B2 (en) 2017-07-09 2019-12-26 Artificial intelligence enhanced system for adaptive control driven AR/VR visual aids
US17/354,830 Active US11521360B2 (en) 2017-07-09 2021-06-22 Artificial intelligence enhanced system for adaptive control driven AR/VR visual aids
US18/052,313 Active US11935204B2 (en) 2017-07-09 2022-11-03 Artificial intelligence enhanced system for adaptive control driven AR/VR visual aids

Country Status (1)

Country Link
US (4) US20190012841A1 (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180144554A1 (en) * 2016-11-18 2018-05-24 Eyedaptic, LLC Systems for augmented reality visual aids and tools
CN111026276A (en) * 2019-12-12 2020-04-17 Oppo (Chongqing) Intelligent Technology Co., Ltd. Visual aid method and related product
CN111857910A (en) * 2020-06-28 2020-10-30 Vivo Mobile Communication Co., Ltd. Information display method, device and electronic device
CN111973889A (en) * 2020-07-24 2020-11-24 Guanglang (Hainan) Biotechnology Co., Ltd. VR equipment with fused optical image distance and screen display content
US10984508B2 (en) 2017-10-31 2021-04-20 Eyedaptic, Inc. Demonstration devices and methods for enhancement for low vision users and systems improvements
US11043036B2 (en) 2017-07-09 2021-06-22 Eyedaptic, Inc. Artificial intelligence enhanced system for adaptive control driven AR/VR visual aids
US20210319098A1 (en) * 2018-12-31 2021-10-14 Intel Corporation Securing systems employing artificial intelligence
US11187906B2 (en) 2018-05-29 2021-11-30 Eyedaptic, Inc. Hybrid see through augmented reality systems and methods for low vision users
US20220149009A1 (en) * 2019-04-25 2022-05-12 Showa Denko Materials Co., Ltd. Method for manufacturing semiconductor device having dolmen structure, method for manufacturing support piece, and laminated film
CN115562490A (en) * 2022-10-12 2023-01-03 西北工业大学太仓长三角研究院 Cross-screen eye movement interaction method and system for aircraft cockpit based on deep learning
US11563885B2 (en) 2018-03-06 2023-01-24 Eyedaptic, Inc. Adaptive system for autonomous machine learning and control in wearable augmented reality and virtual reality visual aids
US11726561B2 (en) 2018-09-24 2023-08-15 Eyedaptic, Inc. Enhanced autonomous hands-free control in electronic visual aids
US12416062B2 (en) 2018-09-24 2025-09-16 Eyedaptic, Inc. Enhanced autonomous hands-free control in electronic visual aids

Family Cites Families (117)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5546099A (en) * 1993-08-02 1996-08-13 Virtual Vision Head mounted display system with light blocking structure
US5777715A (en) 1997-01-21 1998-07-07 Allen Vision Systems, Inc. Low vision rehabilitation system
US6418122B1 (en) 1997-03-21 2002-07-09 Scientific-Atlanta, Inc. Method and apparatus for assuring sufficient bandwidth of a statistical multiplexer
US5892570A (en) 1997-11-24 1999-04-06 State Of Oregon Acting By And Through The State Board Of Higher Education On Behalf Of The University Of Oregon Method and apparatus for measuring and correcting metamorphopsia
JP4018677B2 (en) * 2004-08-10 2007-12-05 スカラ株式会社 Image display device
US8668334B2 (en) 2006-02-27 2014-03-11 Vital Art And Science Incorporated Vision measurement and training system and method of operation thereof
WO2008005848A2 (en) 2006-06-30 2008-01-10 Novavision, Inc. Diagnostic and therapeutic system for eccentric viewing
EP2143273A4 (en) 2007-04-02 2012-08-08 Esight Corp APPARATUS AND METHOD FOR INCREASING VISION
US20080309878A1 (en) 2007-06-13 2008-12-18 Rahim Hirji Near eye opthalmic device
US8066376B2 (en) 2008-05-01 2011-11-29 Vital Art & Science Incorporated Dynamic shape discrimination vision test
US8821350B2 (en) 2009-07-02 2014-09-02 Richard J. Maertz Exercise and communications system and associated methods
US7926943B1 (en) 2009-11-10 2011-04-19 Nike, Inc. Peripheral vision training and/or testing during central vision fixation
EP2502410B1 (en) 2009-11-19 2019-05-01 eSight Corporation A method for augmenting sight
JP2011172216A (en) 2010-01-25 2011-09-01 Panasonic Corp Reproducing apparatus
US8964298B2 (en) 2010-02-28 2015-02-24 Microsoft Corporation Video display modification based on sensor input for a see-through near-to-eye display
WO2011106797A1 (en) 2010-02-28 2011-09-01 Osterhout Group, Inc. Projection triggering through an external marker in an augmented reality eyepiece
WO2011149785A1 (en) 2010-05-23 2011-12-01 The Regents Of The University Of California Characterization and correction of macular distortion
US8941559B2 (en) 2010-09-21 2015-01-27 Microsoft Corporation Opacity filter for display device
US9122053B2 (en) 2010-10-15 2015-09-01 Microsoft Technology Licensing, Llc Realistic occlusion for a head mounted augmented reality display
US8976086B2 (en) 2010-12-03 2015-03-10 Esight Corp. Apparatus and method for a bioptic real time video system
US20160187654A1 (en) 2011-02-28 2016-06-30 Microsoft Technology Licensing, Llc See-through near-eye display glasses with a light transmissive wedge shaped illumination system
US8743244B2 (en) 2011-03-21 2014-06-03 HJ Laboratories, LLC Providing augmented reality based on third party information
US8376849B2 (en) 2011-06-03 2013-02-19 Nintendo Co., Ltd. Apparatus and method for controlling objects on a stereoscopic display
AU2011204946C1 (en) 2011-07-22 2012-07-26 Microsoft Technology Licensing, Llc Automatic text scrolling on a head-mounted display
CN103946732B (en) 2011-09-26 2019-06-14 微软技术许可有限责任公司 Video display modification based on sensor input for a see-through near-eye display
US10571715B2 (en) 2011-11-04 2020-02-25 Massachusetts Eye And Ear Infirmary Adaptive visual assistive device
US8384999B1 (en) 2012-01-09 2013-02-26 Cerr Limited Optical modules
US20130215147A1 (en) 2012-02-17 2013-08-22 Esight Corp. Apparatus and Method for Enhancing Human Visual Performance in a Head Worn Video System
CA2875261C (en) 2012-06-01 2019-05-21 Esight Corp. Apparatus and method for a bioptic real time video system
CA2820241C (en) 2012-06-13 2020-01-14 Robert G. Hilkes An apparatus and method for enhancing human visual performance in a head worn video system
KR101502782B1 (en) * 2012-06-27 2015-03-16 삼성전자 주식회사 Image distortion compensation apparatus, medical image apparatus having the same and method for compensating image distortion
US10223831B2 (en) 2012-08-30 2019-03-05 Atheer, Inc. Method and apparatus for selectively presenting content
US20140152530A1 (en) 2012-12-03 2014-06-05 Honeywell International Inc. Multimedia near to eye display system
US20150355481A1 (en) 2012-12-31 2015-12-10 Esight Corp. Apparatus and method for fitting head mounted vision augmentation systems
US9076257B2 (en) 2013-01-03 2015-07-07 Qualcomm Incorporated Rendering augmented reality based on foreground object
US9180053B2 (en) 2013-01-29 2015-11-10 Xerox Corporation Central vision impairment compensation
WO2014155288A2 (en) * 2013-03-25 2014-10-02 Ecole Polytechnique Federale De Lausanne (Epfl) Method and apparatus for head worn display with multiple exit pupils
EP2979446A1 (en) 2013-03-26 2016-02-03 Seiko Epson Corporation Head-mounted display device, control method of head-mounted display device, and display system
WO2015030099A1 (en) * 2013-08-30 2015-03-05 ブラザー工業株式会社 Image display device, and head-mounted display
WO2015032828A1 (en) 2013-09-04 2015-03-12 Essilor International (Compagnie Generale D'optique) Methods and systems for augmented reality
FR3012627B1 (en) 2013-10-25 2018-06-15 Essilor International Device and method for posture correction
US10437060B2 (en) * 2014-01-20 2019-10-08 Sony Corporation Image display device and image display method, image output device and image output method, and image display system
US9836122B2 (en) 2014-01-21 2017-12-05 Osterhout Group, Inc. Eye glint imaging in see-through computer display systems
CA2939928C (en) 2014-02-19 2021-06-22 Evergaze, Inc. Apparatus and method for improving, augmenting or enhancing vision
US20170084203A1 (en) * 2014-03-07 2017-03-23 D.R.I. Systems LLC Stereo 3d head mounted display applied as a low vision aid
WO2016018487A2 (en) 2014-05-09 2016-02-04 Eyefluence, Inc. Systems and methods for biomechanically-based eye signals for interacting with real and virtual objects
US10564714B2 (en) 2014-05-09 2020-02-18 Google Llc Systems and methods for biomechanically-based eye signals for interacting with real and virtual objects
US9635222B2 (en) 2014-08-03 2017-04-25 PogoTec, Inc. Wearable camera systems and apparatus for aligning an eyewear camera
JP2016036390A (en) * 2014-08-05 2016-03-22 富士通株式会社 Information processing unit, focal point detection method and focal point detection program
WO2016036860A1 (en) 2014-09-02 2016-03-10 Baylor College Of Medicine Altered vision via streamed optical remapping
CN104306102B (en) 2014-10-10 2017-10-24 上海交通大学 Head-mounted vision assistance system for visually impaired patients
US20220171456A1 (en) 2014-11-10 2022-06-02 Irisvision, Inc. Method and System for Remote Clinician Management of Head-Mounted Vision Assist Devices
US11546527B2 (en) 2018-07-05 2023-01-03 Irisvision, Inc. Methods and apparatuses for compensating for retinitis pigmentosa
CA2971163A1 (en) 2014-11-10 2016-05-19 Visionize Corp. Methods and apparatus for vision enhancement
US11372479B2 (en) 2014-11-10 2022-06-28 Irisvision, Inc. Multi-modal vision enhancement system
US9791924B2 (en) * 2014-12-23 2017-10-17 Mediatek Inc. Eye tracking with mobile device in a head-mounted display
US10073516B2 (en) 2014-12-29 2018-09-11 Sony Interactive Entertainment Inc. Methods and systems for user interaction within virtual reality scene using head mounted display
JP6339239B2 (en) * 2015-01-15 2018-06-06 株式会社ソニー・インタラクティブエンタテインメント Head-mounted display device and video display system
US20160235291A1 (en) 2015-02-13 2016-08-18 Dennis Choohan Goh Systems and Methods for Mapping and Evaluating Visual Distortions
US9989764B2 (en) * 2015-02-17 2018-06-05 Thalmic Labs Inc. Systems, devices, and methods for eyebox expansion in wearable heads-up displays
US20160264051A1 (en) 2015-03-12 2016-09-15 Visionize Corp. Night Driving System and Method
NZ773836A (en) 2015-03-16 2022-07-01 Magic Leap Inc Methods and systems for diagnosing and treating health ailments
US11819273B2 (en) 2015-03-17 2023-11-21 Raytrx, Llc Augmented and extended reality glasses for use in surgery visualization and telesurgery
US12237074B2 (en) 2015-03-17 2025-02-25 Raytrx, Llc System, method, and non-transitory computer-readable storage media related to correction of vision defects using a visual display
US11016302B2 (en) 2015-03-17 2021-05-25 Raytrx, Llc Wearable image manipulation and control system with high resolution micro-displays and dynamic opacity augmentation in augmented reality glasses
US11956414B2 (en) 2015-03-17 2024-04-09 Raytrx, Llc Wearable image manipulation and control system with correction for vision defects and augmentation of vision and sensing
US11461936B2 (en) 2015-03-17 2022-10-04 Raytrx, Llc Wearable image manipulation and control system with micro-displays and augmentation of vision and sensing in augmented reality glasses
US12094595B2 (en) 2015-03-17 2024-09-17 Raytrx, Llc AR/XR headset for military medical telemedicine and target acquisition
WO2016149536A1 (en) 2015-03-17 2016-09-22 Ocutrx Vision Technologies, Llc. Correction of vision defects using a visual display
EP3286599A4 (en) 2015-04-22 2018-12-19 eSIGHT CORP. Methods and devices for optical aberration correction
JP2016212177A (en) 2015-05-01 2016-12-15 セイコーエプソン株式会社 Transmission type display device
US20180103917A1 (en) 2015-05-08 2018-04-19 Ngoggle Head-mounted display eeg device
US20180104106A1 (en) * 2015-05-12 2018-04-19 Agency For Science, Technology And Research A system and method for displaying a video image
US20160349509A1 (en) 2015-05-26 2016-12-01 Microsoft Technology Licensing, Llc Mixed-reality headset
WO2016204433A1 (en) * 2015-06-15 2016-12-22 Samsung Electronics Co., Ltd. Head mounted display apparatus
KR20160147636A (en) * 2015-06-15 2016-12-23 삼성전자주식회사 Head Mounted Display Apparatus
CN108475001B (en) 2015-06-18 2021-06-11 爱丽丝视觉全球公司 Adapter for retinal imaging using a handheld computer
WO2017004695A1 (en) 2015-07-06 2017-01-12 Frank Jones Methods and devices for demountable head mounted displays
CA164180S (en) 2015-09-08 2016-09-02 Esight Corp Combined vision apparatus comprising eyewear frame and demountable display
WO2017059522A1 (en) 2015-10-05 2017-04-13 Esight Corp. Methods for near-to-eye displays exploiting optical focus and depth information extraction
US20170185723A1 (en) 2015-12-28 2017-06-29 Integer Health Technologies, LLC Machine Learning System for Creating and Utilizing an Assessment Metric Based on Outcomes
JP6798106B2 (en) 2015-12-28 2020-12-09 ソニー株式会社 Information processing device, information processing method, and program
CA3193007A1 (en) 2016-01-12 2017-07-20 Esight Corp. Language element vision augmentation methods and devices
US10229541B2 (en) 2016-01-28 2019-03-12 Sony Interactive Entertainment America Llc Methods and systems for navigation within virtual reality space using head mounted display
US10667981B2 (en) * 2016-02-29 2020-06-02 Mentor Acquisition One, Llc Reading assistance system for visually impaired
US10568502B2 (en) * 2016-03-23 2020-02-25 The Chinese University Of Hong Kong Visual disability detection system using virtual reality
US10643390B2 (en) 2016-03-30 2020-05-05 Seiko Epson Corporation Head mounted display, method for controlling head mounted display, and computer program
US20170343810A1 (en) * 2016-05-24 2017-11-30 Osterhout Group, Inc. Pre-assembled solid optical assembly for head worn computers
CN105930819B (en) 2016-05-06 2019-04-12 西安交通大学 Real-time urban traffic light recognition system based on monocular vision and GPS integrated navigation
US10091414B2 (en) 2016-06-24 2018-10-02 International Business Machines Corporation Methods and systems to obtain desired self-pictures with an image capture device
WO2018008232A1 (en) 2016-07-04 2018-01-11 ソニー株式会社 Information processing device, information processing method, and program
US10108013B2 (en) * 2016-08-22 2018-10-23 Microsoft Technology Licensing, Llc Indirect-view augmented reality display system
KR102719606B1 (en) 2016-09-09 2024-10-21 삼성전자주식회사 Method, storage medium and electronic device for displaying images
US10950049B1 (en) 2016-09-30 2021-03-16 Amazon Technologies, Inc. Augmenting transmitted video data
AU2017362507A1 (en) 2016-11-18 2018-11-22 Eyedaptic, Inc. Improved systems for augmented reality visual aids and tools
US20180144554A1 (en) 2016-11-18 2018-05-24 Eyedaptic, LLC Systems for augmented reality visual aids and tools
US10869026B2 (en) 2016-11-18 2020-12-15 Amitabha Gupta Apparatus for augmenting vision
WO2018097632A1 (en) 2016-11-25 2018-05-31 Samsung Electronics Co., Ltd. Method and device for providing an image
US20180203231A1 (en) * 2017-01-13 2018-07-19 Microsoft Technology Licensing, Llc Lenslet near-eye display device
US20180365877A1 (en) 2017-03-12 2018-12-20 Eyedaptic, LLC Systems for adaptive control driven ar/vr visual aids
WO2018182734A1 (en) 2017-03-31 2018-10-04 Greget Mark System for using augmented reality for vision
WO2018200717A1 (en) 2017-04-25 2018-11-01 Raytrx, Llc Wearable image manipulation and control system with correction for vision defects and augmentation of vision and sensing
CN107121070B (en) 2017-06-30 2023-03-21 易视智瞳科技(深圳)有限公司 Calibration system and calibration method for a dispensing valve needle
US20190012841A1 (en) 2017-07-09 2019-01-10 Eyedaptic, Inc. Artificial intelligence enhanced system for adaptive control driven ar/vr visual aids
US10969584B2 (en) 2017-08-04 2021-04-06 Mentor Acquisition One, Llc Image expansion optic for head-worn computer
US10984508B2 (en) 2017-10-31 2021-04-20 Eyedaptic, Inc. Demonstration devices and methods for enhancement for low vision users and systems improvements
CA3084546C (en) 2017-12-03 2023-01-31 Frank Jones Enhancing the performance of near-to-eye vision systems
CN111417883B (en) 2017-12-03 2022-06-17 鲁姆斯有限公司 Optical equipment alignment method
CN112534467A (en) 2018-02-13 2021-03-19 弗兰克.沃布林 Method and apparatus for contrast sensitivity compensation
US11563885B2 (en) 2018-03-06 2023-01-24 Eyedaptic, Inc. Adaptive system for autonomous machine learning and control in wearable augmented reality and virtual reality visual aids
US11145096B2 (en) 2018-03-07 2021-10-12 Samsung Electronics Co., Ltd. System and method for augmented reality interaction
US11187906B2 (en) 2018-05-29 2021-11-30 Eyedaptic, Inc. Hybrid see through augmented reality systems and methods for low vision users
WO2020014705A1 (en) 2018-07-13 2020-01-16 Raytrx, Llc Wearable image manipulation and control system with micro-displays and augmentation of vision and sensing in augmented reality glasses
WO2020014707A1 (en) 2018-07-13 2020-01-16 Raytrx, Llc Wearable image manipulation and control system with high resolution micro-displays and dynamic opacity augmentation in augmented reality glasses
KR20210058964A (en) 2018-09-24 2021-05-24 아이답틱 인코포레이티드 Improved autonomous hands-free control in electronic visual aids
CN215313596U (en) 2021-08-13 2021-12-28 易视智瞳科技(深圳)有限公司 Dispensing valve nozzle cleaning device and dispensing equipment
CN215313595U (en) 2021-08-13 2021-12-28 易视智瞳科技(深圳)有限公司 Automatic calibration device for dispensing valve and dispensing equipment

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11282284B2 (en) 2016-11-18 2022-03-22 Eyedaptic, Inc. Systems for augmented reality visual aids and tools
US12033291B2 (en) 2016-11-18 2024-07-09 Eyedaptic, Inc. Systems for augmented reality visual aids and tools
US11676352B2 (en) 2016-11-18 2023-06-13 Eyedaptic, Inc. Systems for augmented reality visual aids and tools
US20180144554A1 (en) * 2016-11-18 2018-05-24 Eyedaptic, LLC Systems for augmented reality visual aids and tools
US10872472B2 (en) 2016-11-18 2020-12-22 Eyedaptic, Inc. Systems for augmented reality visual aids and tools
US11521360B2 (en) 2017-07-09 2022-12-06 Eyedaptic, Inc. Artificial intelligence enhanced system for adaptive control driven AR/VR visual aids
US11935204B2 (en) 2017-07-09 2024-03-19 Eyedaptic, Inc. Artificial intelligence enhanced system for adaptive control driven AR/VR visual aids
US11043036B2 (en) 2017-07-09 2021-06-22 Eyedaptic, Inc. Artificial intelligence enhanced system for adaptive control driven AR/VR visual aids
US10984508B2 (en) 2017-10-31 2021-04-20 Eyedaptic, Inc. Demonstration devices and methods for enhancement for low vision users and systems improvements
US11756168B2 (en) 2017-10-31 2023-09-12 Eyedaptic, Inc. Demonstration devices and methods for enhancement for low vision users and systems improvements
US11563885B2 (en) 2018-03-06 2023-01-24 Eyedaptic, Inc. Adaptive system for autonomous machine learning and control in wearable augmented reality and virtual reality visual aids
US12132984B2 (en) 2018-03-06 2024-10-29 Eyedaptic, Inc. Adaptive system for autonomous machine learning and control in wearable augmented reality and virtual reality visual aids
US11803061B2 (en) 2018-05-29 2023-10-31 Eyedaptic, Inc. Hybrid see through augmented reality systems and methods for low vision users
US11385468B2 (en) 2018-05-29 2022-07-12 Eyedaptic, Inc. Hybrid see through augmented reality systems and methods for low vision users
US11187906B2 (en) 2018-05-29 2021-11-30 Eyedaptic, Inc. Hybrid see through augmented reality systems and methods for low vision users
US12282169B2 (en) 2018-05-29 2025-04-22 Eyedaptic, Inc. Hybrid see through augmented reality systems and methods for low vision users
US11726561B2 (en) 2018-09-24 2023-08-15 Eyedaptic, Inc. Enhanced autonomous hands-free control in electronic visual aids
US12416062B2 (en) 2018-09-24 2025-09-16 Eyedaptic, Inc. Enhanced autonomous hands-free control in electronic visual aids
US20210319098A1 (en) * 2018-12-31 2021-10-14 Intel Corporation Securing systems employing artificial intelligence
US12346432B2 (en) * 2018-12-31 2025-07-01 Intel Corporation Securing systems employing artificial intelligence
US20220149009A1 (en) * 2019-04-25 2022-05-12 Showa Denko Materials Co., Ltd. Method for manufacturing semiconductor device having dolmen structure, method for manufacturing support piece, and laminated film
CN111026276A (en) * 2019-12-12 2020-04-17 Oppo(重庆)智能科技有限公司 Visual aid method and related product
CN111857910A (en) * 2020-06-28 2020-10-30 维沃移动通信有限公司 Information display method, device and electronic device
CN111973889A (en) * 2020-07-24 2020-11-24 光朗(海南)生物科技有限责任公司 VR equipment with fused optical image distance and screen display content
CN115562490A (en) * 2022-10-12 2023-01-03 西北工业大学太仓长三角研究院 Cross-screen eye movement interaction method and system for aircraft cockpit based on deep learning

Also Published As

Publication number Publication date
US20230274507A1 (en) 2023-08-31
US11043036B2 (en) 2021-06-22
US20210319626A1 (en) 2021-10-14
US11935204B2 (en) 2024-03-19
US20200134926A1 (en) 2020-04-30
US11521360B2 (en) 2022-12-06

Similar Documents

Publication Publication Date Title
US20190012841A1 (en) Artificial intelligence enhanced system for adaptive control driven ar/vr visual aids
US12033291B2 (en) Systems for augmented reality visual aids and tools
ES2970058T3 (en) Method and system for providing photography-related recommendation information
US20190385342A1 (en) Wearable image manipulation and control system with micro-displays and augmentation of vision and sensing in augmented reality glasses
US20180365877A1 (en) Systems for adaptive control driven ar/vr visual aids
CN112601509B (en) Hybrid perspective augmented reality system and method for low vision users
US20190331920A1 (en) Improved Systems for Augmented Reality Visual Aids and Tools
US20160133170A1 (en) High resolution perception of content in a wide field of view of a head-mounted display
CN115359567A (en) Method and system for generating virtual and augmented reality
CN110431509A (en) Collapsible virtual reality device
WO2020014705A1 (en) Wearable image manipulation and control system with micro-displays and augmentation of vision and sensing in augmented reality glasses
US20110150276A1 (en) Identifying a characteristic of an individual utilizing facial recognition and providing a display for the individual
KR102077607B1 (en) Augmented Reality Projection Method For The First Aid Training To The Patient
WO2021103990A1 (en) Display method, electronic device, and system
CN107924229A (en) Image processing method and device for a virtual reality device
CN105208370A (en) Display and calibration method of virtual-reality device
Barakat et al. Assistive technology for the visually impaired: Optimizing frame rate (freshness) to improve the performance of real-time objects detection application
Yao et al. Exploring the Use of Drones for Taking Accessible Selfies with Elderly
JP2022059095A (en) Recognition apparatus, wearable character recognition device, recognition method, and recognition program
US11838486B1 (en) Method and device for perspective correction using one or more keyframes
US20250299367A1 (en) Online calibration with convolutional neural network or other machine learning model for video see-through extended reality
EP4524688A1 (en) Disparity sensor for closed-loop active dimming control, and systems and methods of use thereof
CN111240466B (en) Method, control device, and system for adaptively adjusting a near-eye display of hearing-assistance text
CN110198410A (en) Shooting reminding method, wearable device and storage medium
CN120722575A (en) Improved augmented reality rendering using automatic panel augmentation based on per degree hardware pixel estimation, and systems and methods of use thereof

Legal Events

Date Code Title Description
STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION
