
HK1181140B - Input methods for device having multi-language environment and related device and system thereof - Google Patents



Publication number
HK1181140B
Authority
HK
Hong Kong
Prior art keywords
characters
input
area
touch
user
Prior art date
Application number
HK13108213.9A
Other languages
Chinese (zh)
Other versions
HK1181140A1 (en)
Inventor
D.E. Goldsmith
Takumi Takano
Toshiyuki Masui
L.D. Collins
Yasuo Kida
K. Kocienda
Original Assignee
Apple Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US 12/207,429 (US 8,661,340 B2)
Application filed by Apple Inc.
Publication of HK1181140A1
Publication of HK1181140B


Description

Input method for device with multi-language environment and related device and system
The present application is a divisional application of invention patent application No. 200810149120.2, filed September 12, 2008, entitled "Input method for device having multilingual environment and related device and system".
Cross Reference to Related Applications
This application claims priority from U.S. provisional patent application 60/972,185, entitled "Input Methods for Device Having Multi-Language Environment," filed September 13, 2007, the contents of which are incorporated herein by reference.
Technical Field
The subject matter of the present application relates generally to input editing interfaces, and more particularly to input methods for devices having a multilingual environment and related devices and systems.
Background
A computer device may be configured to receive text and character input from a computer keyboard. Modern computer keyboards are composed of rectangular or nearly rectangular keys, and characters, such as the letters A-Z of the English alphabet, are usually engraved or printed on the keys. In most cases, each depression of a key corresponds to the entry of a single character.
For portable devices such as cellular phones, MPEG-1 Audio Layer 3 (MP3) players, or personal digital assistants (PDAs), a conventional computer keyboard may sometimes be too large. Some portable devices incorporate a smaller version of a conventional computer keyboard or use a virtual keyboard to receive user input. A virtual keyboard may take the form of a software application or a software application feature that simulates a computer keyboard. For example, on a touch-sensitive display of a PDA or communication device operated with a stylus, a user may use a virtual keyboard to enter text by selecting or tapping keys of the virtual keyboard.
These smaller keyboards and virtual keyboards may have keys that correspond to one or more characters. For example, some keys may correspond by default to a common character in English, such as the letter "a", and may also correspond to other additional characters, such as another letter or a letter with an accent option. Due to the physical limitations (e.g., size) of the virtual keyboard, a user may find it difficult to type characters that are not readily available on the virtual keyboard.
Input methods for devices having a multi-language environment present particular challenges for input and spelling correction, which must be compatible with the selected language to ensure accuracy and an efficient workflow.
Disclosure of Invention
On a touch-sensitive display, text input is corrected by displaying a list of candidate words on an interface from which a candidate can be selected via touch input. The candidate list may include candidate words having two or more character types (e.g., Roman, kana, Japanese kanji). In one aspect, the candidate list may be scrolled using a finger gesture. When the user's finger passes over a candidate word, the position of the candidate word is adjusted (e.g., offset from the touch input) so that the candidate word is not obscured by the user's finger. When the touch is released, the candidate word is inserted into the document being edited. In another aspect, characters may be erased by touching a key (e.g., a backspace or delete key) and performing a slide, swipe, or other finger gesture. A number of characters proportional to the distance (e.g., linear distance) the finger gesture has traveled on the display are erased. If characters are present in the input area, those characters are erased first, followed by characters in the document being edited. In another aspect, in a Japanese environment, an auto-correction process accounts for typing errors that may occur in the input.
Other embodiments are also disclosed, including embodiments directed to systems, methods, devices, computer-readable media, and user interfaces.
In accordance with some embodiments, there is provided an information processing method including: obtaining text input for a document being edited on a touch-sensitive display; determining whether the text input contains an incorrect character; determining a list of candidate words that are likely to be correct if the text input contains incorrect characters or if the text input is unclear; displaying the list of candidate words on a touch-sensitive display; acquiring a touch input for selecting one of the candidate words; and inserting the candidate word into the document being edited.
In accordance with some embodiments, there is provided an information processing method including: generating a user interface for editing text input on the touch-sensitive display, the user interface including a virtual keyboard, an editing area, and an input area; detecting a finger gesture from a key on the virtual keyboard, the finger gesture indicating an intent of a user to erase one or more characters of a text input displayed in the input area; and erasing a plurality of characters proportional to a distance traveled by the finger on the touch-sensitive display.
In accordance with some embodiments, there is provided an information processing method including: generating, on the touch-sensitive display, a user interface for selecting characters for a document being edited on the touch-sensitive display, the user interface including a virtual keyboard; detecting a touch input starting from a key of the virtual keyboard, the key being associated with a consonant or a vowel; and displaying on the touch-sensitive display a user interface element having a plurality of character options for the consonant or vowel associated with the key, each character option being selectable by the user.
In accordance with some embodiments, there is provided a system comprising: a processor; a memory coupled to the processor; and one or more modules stored in the memory and executed by the processor, the modules comprising: means for obtaining text input for a document being edited on a touch-sensitive display; means for determining whether the text input contains an incorrect character; means for determining a list of candidate words that are likely to be correct if the text input contains incorrect characters or if the text input is unclear; means for displaying the list of candidate words on the touch-sensitive display; means for obtaining a touch input selecting one of the candidate words; and means for inserting the candidate word into the document being edited.
In accordance with some embodiments, there is provided a system comprising: a processor; a memory coupled to the processor; and one or more modules stored in the memory and executed by the processor, the modules comprising: means for generating a user interface on a touch-sensitive display for editing text input, the user interface including a virtual keyboard, an editing area, and an input area; means for detecting a finger gesture from a key on the virtual keyboard, the finger gesture indicating an intent of a user to erase one or more characters of a text input displayed in the input area; and means for erasing a plurality of characters proportional to a distance traveled by the finger on the touch-sensitive display.
In accordance with some embodiments, there is provided a system comprising: a processor; a memory coupled to the processor; and one or more modules stored in the memory and executed by the processor, the modules comprising: means for generating, on a touch-sensitive display, a user interface for selecting characters for a document being edited on the touch-sensitive display, the user interface including a virtual keyboard; means for detecting a touch input from a key of the virtual keyboard, the key associated with a consonant or vowel; and means for displaying, on the touch-sensitive display, a user interface element having a plurality of character options for the consonant or vowel associated with the key, each character option being selectable by the user.
In accordance with some embodiments, there is provided an information processing apparatus including: means for obtaining text input for a document being edited on the touch-sensitive display; means for determining whether the text input contains an incorrect character; means for determining a list of candidate words that are likely to be correct if the text input contains incorrect characters or if the text input is unclear; means for displaying the list of candidate words on a touch-sensitive display; means for obtaining a touch input selecting one of the candidate words; and means for inserting the candidate word into the document being edited.
Optionally, at least some of the text input is Japanese.
Optionally, the list of candidate words comprises candidate words having characters of two or more character types.
Optionally, the list of candidate words is determined according to one or more of a user-selected language or statistics.
Optionally, the list of candidate words is determined using an auto-correction search that takes into account typing errors that may occur in the text input.
Optionally, the apparatus for obtaining a touch input selecting one of the candidate words further comprises: means for detecting a finger gesture touching or passing one or more candidate words in the list of candidate words.
Optionally, the apparatus further comprises: means for displaying, for each candidate word touched or passed by the detected finger gesture, the candidate word at a location on the touch-sensitive display different from the initial location at which the candidate word was displayed before the finger gesture was detected.
In accordance with some embodiments, there is provided an information processing apparatus including: means for generating a user interface for editing text input on a touch-sensitive display, the user interface including a virtual keyboard, an editing area, and an input area; means for detecting a finger gesture from a key on the virtual keyboard, the finger gesture indicating an intent of a user to erase one or more characters of a text input displayed in the input area; and means for erasing a plurality of characters proportional to a distance traveled by the finger on the touch-sensitive display.
Optionally, the characters displayed in the input area are erased first, followed by the characters in the editing area.
Optionally, the number of characters erased is proportional to the distance traversed by the gesture, limited by the virtual boundary of the virtual keyboard.
In accordance with some embodiments, there is provided an information processing apparatus including: means for generating, on a touch-sensitive display, a user interface for selecting characters for a document being edited on the touch-sensitive display, the user interface including a virtual keyboard; means for detecting a touch input from a key of the virtual keyboard, the key being associated with a consonant or vowel; and means for displaying on the touch-sensitive display a user interface element having a plurality of character options for the consonant or vowel associated with the key, each character option being selectable by the user.
Optionally, the apparatus further comprises: means for detecting a drag or slide finger gesture indicating an intent of the user to select one of the character options; and means for inserting the selected character option into the document being edited.
Optionally, at least some of the character options are Japanese.
Drawings
FIG. 1 shows an exemplary portable device for receiving text input.
FIG. 2 is a flow diagram of an exemplary process for correcting input in a multi-lingual environment.
FIG. 3 is a flow diagram of an exemplary process for erasing characters in a multi-language environment.
FIG. 4 is a block diagram of an example system architecture for performing the operations described with reference to FIG. 3.
FIG. 5 is a flow diagram of an exemplary process for displaying selectable character options for a document being edited.
Detailed Description
Input editing user interface
FIG. 1 shows an exemplary portable device 100 for receiving text input. The portable device 100 may be a telephone, a media player, an email device, or any other portable device capable of receiving text input. The device 100 includes a virtual keyboard 102, an editing area 106, and an input area 108. Each of these areas may be part of the touch-sensitive display 104. In some implementations, the touch-sensitive display 104 can be a multi-touch-sensitive display for receiving multi-touch inputs or finger gestures. For example, the multi-touch-sensitive display 104 can process multiple simultaneous touch points, including processing data related to the pressure, extent, and/or location of each touch point. Such processing facilitates gestures and interactions using multiple fingers, including chording and other interactions. Some examples of multi-touch-sensitive display technology are described in U.S. Pat. Nos. 6,323,846, 6,570,557, and 6,677,932, and U.S. Patent Publication 2002/0015024A1, each of which is incorporated herein by reference in its entirety.
The virtual keyboard 102 may be displayed in various layouts according to user selection. For example, the user may choose to display one of a plurality of virtual keyboard layouts using the operation buttons 120 or other finger gestures. As shown, the virtual keyboard 102 has an English keyboard layout (e.g., QWERTY). However, the keyboard layout may be configured according to a selected language, such as Japanese, French, German, Italian, and so forth. In a Japanese environment, the user can switch among a kana keyboard, a Roman character keyboard, and a keyboard for Japanese kanji symbols.
A user may enter text into a document (e.g., text document, instant message, email, address book) in edit area 106 by interacting with virtual keyboard 102. As the user enters characters, an input correction process is activated; the process may detect text entry errors and display candidate words 112 in the input area 108. Any number of candidate words 112 may be generated. The displayed group of candidate words 112 may include candidate words 112 having characters of two or more character types (e.g., Roman, kana, kanji). In some embodiments, clicking on arrow 114 or another user interface element causes a new page of candidate words 112 to be displayed in input area 108, thereby allowing additional candidate words 112 to be displayed. In some embodiments, the candidate list may be determined based on a user-selected language and statistical information (e.g., a user dictionary or a history of user-entered data for the user-selected language). An exemplary method for determining virtual keyboard correction options is described in U.S. patent application 11/228,737, entitled "Activating Virtual Keys of a Touch-Screen Virtual Keyboard," which is hereby incorporated by reference in its entirety.
In some implementations, candidate words are found using an auto-correction search. In performing an auto-correction search, a list of candidate words may be generated from the text input while taking into account typing errors that may be present in the text input.
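The patent does not specify the auto-correction algorithm. The following is a minimal sketch of one common realization of such a search, assuming an edit-distance match of the typed Roman reading against a small dictionary whose entries may mix character types (Roman, kana, kanji); the dictionary contents and the `max_typos` threshold are illustrative assumptions.

```python
def edit_distance(a, b):
    """Classic Levenshtein distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def candidate_words(text_input, dictionary, max_typos=2):
    """Return display forms whose Roman reading is within max_typos edits.

    Each dictionary entry is a (roman_reading, display_form) pair; the
    display forms may be Roman, kana, or kanji. Closer matches come first.
    """
    scored = [(edit_distance(text_input, reading), display)
              for reading, display in dictionary]
    scored = [(d, w) for d, w in scored if d <= max_typos]
    scored.sort()
    return [w for _, w in scored]
```

For instance, the truncated input "touky" is within two edits of the reading "toukyou", so both the kanji and kana display forms for Tokyo would surface as candidates, while unrelated readings are filtered out.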
List of candidate words
In the illustrated example, the user has selected one candidate word 110 to replace "touky" in a Japanese context. The candidate word 110 is selected by the user touching candidate word 110 with one or more fingers. When the user releases the touch, the selected candidate word 110 is inserted into the document in edit area 106. In some implementations, when a user touches a candidate word 110, the candidate word 110 is displayed at a different location (e.g., an offset location) on the touch-sensitive display 104 to prevent the user's finger from obscuring the candidate word 110. The user may scroll through the candidate list by sliding a finger over the candidate words 112. As the finger passes over each candidate word 112, that candidate word is displayed at a different location. For example, the user may slide an index finger across the candidate words 112 in input area 108 until reaching candidate word 110. When the user releases the touch, candidate word 110 is inserted into the document being edited.
FIG. 2 is a flow diagram of an exemplary process 200 for correcting input in a multi-language environment. In some implementations, the process 200 begins when text input is obtained for a document being edited on a touch-sensitive display (202). The text input may be obtained when one or more touches or finger gestures are performed (e.g., on a virtual keyboard). For example, some or all of the text input may take the form of Roman characters or Japanese characters (e.g., kana or Japanese kanji). Process 200 may then determine whether the text input contains one or more incorrect characters (204). For example, language dictionaries, statistics, and/or fuzzy logic may be used to determine incorrect text input.
If the text input contains incorrect characters, or if the text input is ambiguous, a candidate list of possibly correct candidate words is determined (206) and displayed (208) to the user on the touch-sensitive display. For example, in a Japanese context, if the text input is a phonetic spelling in Roman characters of Japanese characters, the candidate list may include candidate words having two or more character types (e.g., Japanese kanji and kana). Even if the text input does not include incorrect characters, there may still be ambiguity in the conversion from Roman characters to Japanese characters. To account for this ambiguity, process 200 includes determining a candidate list of multiple possibly correct candidate words, allowing the user to select the desired Roman-to-Japanese conversion from the candidate list. Any number of candidate words may be included in the candidate list. For example, the list may be displayed in a dedicated area of the touch-sensitive display (e.g., input area 108).
The user may scroll through the candidate list with a finger. When the finger passes over (or adjacent to) a candidate word, the candidate word may be displayed at a location on the touch-sensitive display offset from its original location, preventing the user's finger from obscuring the selected candidate word. After a touch input (e.g., one or more touch or finger gestures) selecting a candidate word is obtained (210), the selected candidate word is inserted into the document being edited (212).
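The hit-testing and offset behavior described above can be sketched as follows. This is an illustrative model only: the row layout, pixel coordinates, and the fixed 40-pixel upward offset are assumptions, not values from the patent.

```python
def offset_position(touch_x, touch_y, offset_dy=40):
    """Display position for the candidate word under the finger.

    The word is redrawn offset_dy pixels above the touch point so that it
    remains visible while the finger rests on its original location.
    """
    return (touch_x, touch_y - offset_dy)

def candidate_under_finger(candidates, touch_x):
    """Pick the candidate whose horizontal span contains the touch point.

    candidates is a list of (word, x_start, x_end) tuples laid out in a
    single row. Returns None when the finger is not over any candidate.
    """
    for word, x_start, x_end in candidates:
        if x_start <= touch_x < x_end:
            return word
    return None
```

Scrolling a finger across the row repeatedly calls `candidate_under_finger`; whenever the result changes, the newly touched word is redrawn at `offset_position` and the released word returns to its original place.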
Erase character
In the illustrated example, a user may erase characters in the text input by touching the backspace or delete key 116 and then sliding a finger from key 116 toward the opposite end of the virtual keyboard 102. As the user slides the finger, a number of characters proportional to the distance the finger has traveled on the touch-sensitive display 104 are erased. If there are characters in the input area 108 (e.g., characters currently being added to the document), those characters may be erased first. When the characters in the input area 108 are exhausted, characters in the edit area 106 (e.g., characters of words previously entered into the document) may be erased.
FIG. 3 is a flow diagram of an exemplary process 300 for erasing characters in a multi-language environment. In some implementations, the process 300 begins by generating a user interface for editing text input on a touch-sensitive display (302). The user interface may include a virtual keyboard, an editing area, and an input area. A finger gesture is detected starting from a key (e.g., a backspace or delete key) on the virtual keyboard, indicating that the user intends to erase one or more characters of the text input displayed in the input area (304). In some implementations, the gesture can be a slide or swipe on the touch-sensitive display starting from the touched key. The slide or swipe can be in any direction on the touch-sensitive display. The distance of the gesture that results in character erasure (e.g., the straight-line distance the finger travels over the display) may be limited by the visual boundaries of the virtual keyboard displayed on the touch-sensitive display, or by any other desired boundaries. The number of characters erased by the gesture may be proportional to the linear distance traveled by the finger on the touch-sensitive display (306). In some embodiments, as described with reference to FIG. 1, the characters displayed in the input area are erased first, and characters in the editing area are erased subsequently.
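The proportional-erase rule of process 300 can be sketched as a small pure function. The pixels-per-character ratio is an assumption for illustration; the patent only requires that the count be proportional to the gesture distance, with the input area consumed before the editing area.

```python
def erase_by_gesture(input_area, edit_area, distance_px, px_per_char=20):
    """Erase characters proportional to the gesture's linear distance.

    Characters in the input area (text not yet committed to the document)
    are erased first; any remainder comes out of the editing area.
    Returns the updated (input_area, edit_area) strings.
    """
    n = int(distance_px // px_per_char)    # characters to erase
    from_input = min(n, len(input_area))   # input area is exhausted first
    input_area = input_area[:len(input_area) - from_input]
    remaining = n - from_input             # then the editing area
    if remaining > 0:
        edit_area = edit_area[:max(0, len(edit_area) - remaining)]
    return input_area, edit_area
```

With the assumed 20 px per character, a 40 px swipe removes two characters from the input area, while a 100 px swipe removes five: all three input-area characters first, then two more from the end of the editing area.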
Exemplary System architecture
FIG. 4 is a block diagram of an example system architecture 400 for performing the various operations described with reference to FIGS. 1-3. For example, the architecture 400 may be included in the portable device 100 described with reference to FIG. 1. The architecture 400 includes a processor 410, a memory 420, a storage device 430, and an input/output device 440. Each of the components 410, 420, 430, and 440 is interconnected using a system bus 450. The processor 410 is capable of processing instructions for execution within the architecture 400. In some implementations, the processor 410 is a single-threaded processor. In other implementations, the processor 410 is a multi-threaded processor. The processor 410 is capable of processing instructions stored in the memory 420 or on the storage device 430 to display graphical information for a user interface on the input/output device 440.
Memory 420 stores information within the architecture 400. In some implementations, the memory 420 is a computer-readable medium. In other implementations, the memory 420 is a volatile memory unit. In still other implementations, the memory 420 is a non-volatile memory unit.
The storage device 430 can provide mass storage for the architecture 400. In some implementations, the storage device 430 is a computer-readable medium. In various different implementations, the storage device 430 may be a floppy disk device, a hard disk device, an optical disk device, or a tape device.
Input/output devices 440 provide input/output operations for architecture 400. In some implementations, the input/output device 440 includes a keyboard and/or a pointing device. In other embodiments, the input/output device 440 includes a display unit for displaying a graphical user interface.
The described features can be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations of them. The features can be implemented in a computer program product tangibly embodied in an information carrier, e.g., in a machine-readable storage device or in a propagated signal, for execution by a programmable processor; and method steps can be performed by a programmable processor executing a program of instructions to perform functions of the described embodiments by operating on input data and generating output. Advantageously, the described features can be implemented in one or more computer programs that are executable on a programmable system including at least one programmable processor coupled to receive data and instructions from, and to transmit data and instructions to, a data storage system, at least one input device, and at least one output device. A computer program is a set of instructions that can be used, directly or indirectly, in a computer to perform a certain activity or bring about a certain result. A computer program can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
Suitable processors for the execution of a program of instructions include, by way of example, both general and special purpose microprocessors, and the sole processor or one of multiple processors of any kind of computer. Generally, a processor receives instructions and data from a read-only memory, a random access memory, or both. The essential elements of a computer are a processor for executing instructions and one or more memories for storing instructions and data. Generally, a computer will also include, or be operatively coupled to communicate with, one or more mass storage devices for storing data files; such devices include magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and optical disks. Storage devices suitable for tangibly embodying computer program instructions and data include all forms of non-volatile memory, including semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, an ASIC (application-specific integrated circuit).
To provide for interaction with a user, the features can be implemented on a computer having a display device such as a CRT (cathode ray tube) or LCD (liquid crystal display) monitor for displaying information to the user and a keyboard and a pointing device such as a mouse or a trackball by which the user can provide input to the computer.
The features can also be implemented in a computer system that includes a back-end component, such as a data server, or that includes a middleware component, such as an application server or an Internet server, or that includes a front-end component, such as a client computer having a graphical user interface or an Internet browser, or any combination of them. The components of the system can be connected by any form or medium of digital data communication, such as a communication network. Examples of communication networks include a LAN, a WAN, wireless networks, and the computers and networks forming the Internet.
The computer system may include clients and servers. A client and a server are generally remote from each other and are typically connected by a network such as the one described above. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
Other embodiments
In a Japanese environment, user interface elements (e.g., pop-up menus or heads-up displays) associated with virtual keyboard keys may be used to select distinct characters. Each consonant and each vowel may have a key. In one embodiment, if the user touches and slides a virtual keyboard key, a pop-up menu opens that lets the user select the syllable combining that consonant (or no consonant) with the appropriate vowel. If the "k (ka)" key is dragged, the user is given the choice of ka, ki, ku, ke, or ko. If a vowel key is dragged, the user is given the choice of a, i, u, e, o, and so on.
When the user selects a vowel by sliding horizontally, the user is given a choice of variants if the drag direction changes to vertical. For example, if the user starts on the "k (ka)" key and slides to the right, the user sees options for ka, ki, ku, ke, and ko. If the user slides down, the options change to ga, gi, gu, ge, and go, and the user can again slide horizontally to select among these syllables starting with the "g" consonant. In addition, the user can also slide up, giving each pop-up menu additional rows of options (e.g., three rows, corresponding to no vertical shift, a downward shift, and an upward shift).
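The drag-to-syllable mapping above can be sketched as a lookup keyed by consonant, row variant, and vowel. The table below covers only the k-row and the bare vowel row for illustration; a real keyboard would carry every consonant row, and the exact kana layout is an assumption, not taken from the patent figures.

```python
# Row tables: a horizontal slide picks the vowel; a prior vertical slide
# switches the row variant (e.g., the plain k-row to the voiced g-row).
KANA_ROWS = {
    ("k", "plain"):  ["か", "き", "く", "け", "こ"],   # ka ki ku ke ko
    ("k", "voiced"): ["が", "ぎ", "ぐ", "げ", "ご"],   # ga gi gu ge go
    ("",  "plain"):  ["あ", "い", "う", "え", "お"],   # a i u e o
}
VOWEL_INDEX = {"a": 0, "i": 1, "u": 2, "e": 3, "o": 4}

def select_kana(consonant, vowel, variant="plain"):
    """Return the kana for a consonant key + vowel slide (+ optional variant)."""
    row = KANA_ROWS[(consonant, variant)]
    return row[VOWEL_INDEX[vowel]]
```

For example, starting on the "k (ka)" key and sliding to the "e" position yields け, while first sliding down to the voiced variant and then to "e" yields げ.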
If the user simply taps a key, the user gets a wildcard (ambiguous) character that can match anything the user could generate using that key. If the "k (ka)" key is tapped, the user is given something matching ka or any other syllable that key can produce at that position. This wildcard character can be converted into an unambiguous syllable or character by sliding over the wildcard character in exactly the same way the user can slide over a key.
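A minimal sketch of this wildcard behavior follows. The set of syllables reachable from the "k (ka)" key is an illustrative assumption (plain and voiced rows); the patent only states that the wildcard matches whatever the key can produce.

```python
# Hypothetical syllable set reachable from the "k (ka)" key.
K_KEY_SYLLABLES = {"か", "き", "く", "け", "こ", "が", "ぎ", "ぐ", "げ", "ご"}

def wildcard_matches(kana, reachable=K_KEY_SYLLABLES):
    """True if the concrete kana is one the key's wildcard could stand for."""
    return kana in reachable

def resolve_wildcard(choice, reachable=K_KEY_SYLLABLES):
    """Resolve the wildcard to an unambiguous syllable, as by sliding over it."""
    if choice not in reachable:
        raise ValueError("syllable not reachable from this key")
    return choice
```

A matching engine would treat the unresolved wildcard as any member of the set when ranking candidate words, narrowing to one syllable only after the user slides over the wildcard to resolve it.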
FIG. 5 is a flow diagram of an exemplary process 500 for displaying selectable character options for a document being edited. In some implementations, process 500 begins by generating, on a touch-sensitive display, a user interface for selecting characters for a document being edited on the touch-sensitive display (502). The user interface may include a virtual keyboard. A touch input is detected from a key of the virtual keyboard, where the key is associated with a consonant or a vowel (504). In some implementations, the touch input can be a slide or swipe on the touch-sensitive display starting from the touched key. A user interface element is displayed on the touch-sensitive display, where the user interface element (e.g., a pop-up menu) includes a plurality of character options for the consonant or vowel associated with the key (506). Each character option may be selected by the user. In some embodiments, at least some of the character options are Japanese. Further, in some embodiments, a drag or slide finger gesture is detected (508). The finger gesture may indicate that the user intends to select one of the character options. Once the finger gesture is detected, the selected character option may be inserted into the document being edited (510).
Thus, the steps of the method according to the present invention can be implemented by one or more functional modules running on a general-purpose processor or a dedicated chip, and the technical solutions formed by these functional modules, and by their combination with hardware as described in FIG. 4, are of course within the scope of the present invention.
Various embodiments have been described herein. It will be understood that various modifications are possible. For example, components of one or more embodiments may be combined, deleted, modified, or supplemented to form yet another embodiment. The logic flows depicted in the figures do not require the particular order shown, or sequential order, to achieve desirable results. In addition, other steps may be provided, or steps may be deleted, in the described flows, and other components may be added to, or deleted from, the described systems. Accordingly, other embodiments are within the scope of the following claims.

Claims (6)

1. A method for erasing characters in a multilingual environment, the method comprising:
generating a user interface for editing text input on a touch-sensitive display, the user interface including an editing area and an input area, and a virtual keyboard located outside the editing area and the input area, the virtual keyboard having keys for typing characters in the editing area or the input area;
detecting a user gesture within an area of the virtual keyboard, the user gesture starting from a key on the virtual keyboard, wherein the user gesture is operative to cause erasure of one or more characters of a text input displayed in the input area or the editing area, wherein if a character is displayed in the input area, the character displayed in the input area is erased first; and
erasing a plurality of characters proportional to a distance traveled by the user gesture on the touch-sensitive display.
2. The method of claim 1, wherein characters in the edit area are erased when the characters displayed in the input area are exhausted.
3. The method of claim 1, wherein the number of characters erased is proportional to the distance traversed by the user gesture limited by a virtual boundary of the virtual keyboard.
4. An apparatus for erasing characters in a multilingual environment, the apparatus comprising:
means for generating a user interface on a touch-sensitive display for editing text input, the user interface including an editing area and an input area, and a virtual keyboard located outside the editing area and the input area, the virtual keyboard having keys for typing characters in the editing area or the input area;
means for detecting a user gesture within an area of the virtual keyboard, the user gesture starting from a key on the virtual keyboard, wherein the user gesture is operative to cause erasure of one or more characters of a text input displayed in the input area or the editing area, wherein if a character is displayed in the input area, the character displayed in the input area is erased first; and
means for erasing a plurality of characters proportional to a distance traversed by the user gesture on the touch-sensitive display.
5. The apparatus of claim 4, wherein characters in the edit region are erased when the characters displayed in the input region are exhausted.
6. The device of claim 4, wherein the number of characters erased is proportional to the distance traversed by the user gesture limited by a virtual boundary of the virtual keyboard.
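The erase behavior recited in claims 1-6 can be illustrated with a short sketch: the number of characters erased is proportional to the gesture's travel distance, travel is clamped at the virtual keyboard's boundary, and characters in the input area are erased before characters in the editing area. This is a hedged Python model under stated assumptions; the function name, the keyboard width, and the pixels-per-character ratio are invented for illustration and are not part of the claims.

```python
# Illustrative sketch of the claimed erase gesture (names and ratios assumed).
def erase_by_gesture(input_area, edit_area, distance_px,
                     keyboard_width_px=320, px_per_char=20):
    # Claims 3 and 6: travel is limited by the keyboard's virtual boundary
    distance_px = min(distance_px, keyboard_width_px)
    count = distance_px // px_per_char  # character count proportional to travel

    # Claims 1 and 4: characters displayed in the input area are erased first
    from_input = min(count, len(input_area))
    input_area = input_area[:len(input_area) - from_input]

    # Claims 2 and 5: once the input area is exhausted, erase from the edit area
    from_edit = min(count - from_input, len(edit_area))
    edit_area = edit_area[:len(edit_area) - from_edit]

    return input_area, edit_area

# A 100-pixel swipe erases five characters: both input-area characters,
# then three from the end of the editing area.
print(erase_by_gesture("かな", "こんにちは", 100))  # prints ('', 'こん')
```

A shorter swipe (e.g. 30 pixels) erases only from the input area, and a swipe past the keyboard edge is clamped rather than erasing without bound, matching the boundary limitation of claims 3 and 6.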
HK13108213.9A 2007-09-13 2013-07-12 Input methods for device having multi-language environment and related device and system thereof HK1181140B (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US97218507P 2007-09-13 2007-09-13
US60/972,185 2007-09-13
US12/207,429 2008-09-09
US12/207,429 US8661340B2 (en) 2007-09-13 2008-09-09 Input methods for device having multi-language environment

Publications (2)

Publication Number Publication Date
HK1181140A1 HK1181140A1 (en) 2013-11-01
HK1181140B true HK1181140B (en) 2016-01-15


Similar Documents

Publication Publication Date Title
US9465536B2 (en) Input methods for device having multi-language environment
US20090058823A1 (en) Virtual Keyboards in Multi-Language Environment
CN103026318B (en) Input method editor
US8739055B2 (en) Correction of typographical errors on touch displays
US10838513B2 (en) Responding to selection of a displayed character string
US20080154576A1 (en) Processing of reduced-set user input text with selected one of multiple vocabularies and resolution modalities
US20140078065A1 (en) Predictive Keyboard With Suppressed Keys
US20130002553A1 (en) Character entry apparatus and associated methods
US20130007606A1 (en) Text deletion
US8704761B2 (en) Input method editor
JP2005275635A (en) Method and program for japanese kana character input
HK1181140B (en) Input methods for device having multi-language environment and related device and system thereof
HK1130914B (en) Input methods for device having multi-language environment
KR102869439B1 (en) Character input device implemented in software
KR20240081804A (en) Character input device implemented in software
HK1183952A (en) Input method editor
HK1137525B (en) Language input interface on a device
HK1137525A1 (en) Language input interface on a device
HK1169191B (en) Input method editor