CN119166094A - A display device and a sound effect configuration method - Google Patents

Info

Publication number
CN119166094A
CN119166094A (application CN202411116832.XA)
Authority
CN
China
Prior art keywords
sound effect
audio
option
data
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202411116832.XA
Other languages
Chinese (zh)
Inventor
刘文晓
李仁锋
Current Assignee
Hisense Visual Technology Co Ltd
Original Assignee
Hisense Visual Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hisense Visual Technology Co Ltd filed Critical Hisense Visual Technology Co Ltd
Priority claimed from application CN202411116832.XA
Publication of CN119166094A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/43Querying
    • G06F16/438Presentation of query results
    • G06F16/4387Presentation of query results by the use of playlists
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/44Browsing; Visualisation therefor
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04817Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0482Interaction with lists of selectable items, e.g. menus

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract


Some embodiments of the present application provide a display device and a sound effect configuration method, wherein the sound effect configuration method can parse a first sound effect option of the first media asset data when playing the first media asset data, and control the display to display a sound effect selection interface including the first sound effect option. In response to a sound effect selection instruction of the user based on the sound effect selection interface, control the audio output device to play the first audio data with a target sound effect option. Parse the second sound effect option of the second media asset data, and if the second sound effect option includes the target sound effect option, control the audio output device to play the second audio data with the target sound effect option. If the second sound effect option does not include the target sound effect option, update the sound effect selection interface according to the second sound effect option, and control the display to display the updated sound effect selection interface.

Description

Display device and sound effect configuration method
Technical Field
The present application relates to the field of display devices, and in particular, to a display device and a sound effect configuration method.
Background
When the display device plays media asset data, it can parse the sound effect options supported by that data and generate a sound effect interface from them. The user may select at least one sound effect option in the interface to make the display device play the media asset data with the selected sound effect.
After the current media asset finishes playing, the display device can automatically switch to the next queued media asset. At the moment of switching, the sound effect options supported by the new asset have not yet been determined, so the display device plays it with the asset's initial sound effect options, and the user has to re-adjust the sound effect manually, which degrades the user experience.
Disclosure of Invention
The application provides a display device and a sound effect configuration method, which are used to solve the problem that the sound effect has to be re-adjusted when the display device switches to the next media asset.
In a first aspect, some embodiments of the present application provide a display apparatus comprising a display configured to display a user interface, an audio output device configured to play audio data, and a controller configured to:
when first media asset data is played, parse a first sound effect option of the first media asset data, and control the display to display a sound effect selection interface, wherein the sound effect selection interface comprises the first sound effect option, and the first media asset data comprises first audio data;
in response to a sound effect selection instruction input by a user via the sound effect selection interface, control the audio output device to play the first audio data with a target sound effect option, wherein the target sound effect option is the first sound effect option designated by the sound effect selection instruction;
parse a second sound effect option of second media asset data, wherein the second media asset data is the next media asset data adjacent to the first media asset data in a media asset play list, and the second media asset data comprises second audio data;
if the second sound effect option comprises the target sound effect option, control the audio output device to play the second audio data with the target sound effect option;
and if the second sound effect option does not comprise the target sound effect option, update the sound effect selection interface according to the second sound effect option, and control the display to display the updated sound effect selection interface.
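The carry-over behavior claimed above can be sketched as follows; this is an illustrative minimal sketch, and the class, method, and field names are assumptions of this sketch, not identifiers from the patent.

```python
class SoundEffectController:
    """Minimal sketch of the claimed sound effect carry-over logic."""

    def __init__(self):
        self.target_option = None      # option chosen on the selection interface
        self.interface_options = []    # options currently shown on the interface

    def select_option(self, option):
        # The user designates a target sound effect option on the interface.
        self.target_option = option

    def play(self, sound_effect_options):
        # Parse the asset's sound effect options and decide how to play it.
        self.interface_options = list(sound_effect_options)
        if self.target_option in sound_effect_options:
            return self.target_option          # carry the earlier choice over
        return sound_effect_options[0]         # fall back to the asset default
```

When the next queued asset supports the previously selected option, playback continues with it and no manual re-adjustment is needed; otherwise the interface is refreshed with the new asset's options.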
In some embodiments, in executing the parsing of the first sound effect option of the first media asset data, the controller is configured to perform:
decoding the encoded first media asset to obtain the first media asset data;
parsing the first media asset data to obtain sound effect scene information of the first media asset data;
and generating the first sound effect option according to the sound effect scene information.
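The three steps above can be sketched as a small pipeline; `decode` and `extract_scene` stand in for the device's decoder and scene-information parser and are assumptions of this sketch.

```python
def parse_first_sound_effect_options(encoded_media, decode, extract_scene):
    """Decode the media asset, parse its sound effect scene information,
    and generate the sound effect options from that information."""
    media = decode(encoded_media)               # step 1: decode the asset
    scene_info = extract_scene(media)           # step 2: parse scene information
    return list(scene_info.get("options", []))  # step 3: generate the options
```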
In some embodiments, before the step of controlling the display to display the sound effect selection interface, the controller is further configured to:
parsing sound effect configuration information in the sound effect scene information based on the first sound effect option, wherein the sound effect configuration information comprises current configuration information and optional configuration information;
and generating the sound effect selection interface according to the current configuration information and the optional configuration information.
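A minimal sketch of assembling the interface entries from the current and optional configuration information; the dictionary layout of an entry is an assumption of this sketch.

```python
def build_selection_interface(current, optional):
    """Return interface entries with the active option listed first and marked."""
    labels = [current] + [o for o in optional if o != current]
    return [{"label": label, "selected": label == current} for label in labels]
```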
In some embodiments, the first media asset data further comprises first subtitle data, and, in controlling the audio output device to play the first audio data with the target sound effect option, the controller is configured to perform:
parsing the target sound effect option to obtain target language information;
acquiring target subtitle data according to the target language information, and updating the first audio data according to the target language information;
aligning the timing relationship of the target subtitle data with the updated first audio data;
and, according to the timing relationship, controlling the audio output device to play the first audio data and controlling the display to display the target subtitle data.
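Aligning the subtitle timeline with the updated audio can be sketched as a uniform timestamp shift; the cue fields and the millisecond unit are assumptions of this sketch.

```python
def align_subtitles(cues, audio_offset_ms):
    """Shift each subtitle cue so its timing matches the updated audio track."""
    return [
        {**cue,
         "start_ms": cue["start_ms"] + audio_offset_ms,
         "end_ms": cue["end_ms"] + audio_offset_ms}
        for cue in cues
    ]
```

The original cue list is left untouched; the aligned copies are handed to the renderer together with the updated audio.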
In some embodiments, in controlling the audio output device to play the first audio data with the target sound effect option, the controller is specifically configured to perform:
parsing the target sound effect option at an application layer to obtain target configuration information;
sending the target configuration information to a middleware layer through a first interface, the first interface being used to connect the application layer and the middleware layer;
sending the target configuration information to a kernel layer through a second interface;
at the kernel layer, updating the sound effect scene information of the first media asset data based on the target configuration information;
and sending the updated sound effect scene information from the kernel layer to the application layer to acquire updated first audio data.
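The application-to-middleware-to-kernel relay described above can be sketched with the two interfaces modeled as callables; all names here are illustrative assumptions, not APIs from the patent.

```python
def apply_target_option(target_option, first_interface, second_interface, kernel):
    """Relay the parsed target configuration down through the layers,
    then return the scene information the kernel sends back up."""
    config = {"option": target_option}             # parsed at the application layer
    middleware_msg = first_interface(config)       # application -> middleware
    kernel_msg = second_interface(middleware_msg)  # middleware -> kernel
    return kernel(kernel_msg)                      # kernel updates scene information
```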
In some embodiments, in controlling the audio output device to play the second audio data with the target sound effect option, the controller is configured to perform:
acquiring the second audio data;
parsing the sound effect scene information of the second audio data;
configuring the sound effect scene information of the second audio data according to the target configuration information;
and updating the second audio data according to the configured sound effect scene information, and controlling the audio output device to play the updated second audio data.
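Re-using the target configuration for the second audio data can be sketched as a merge into its parsed scene information; the dictionary shape of the audio data is an assumption of this sketch.

```python
def configure_second_audio(second_audio, target_config):
    """Overlay the target configuration onto the second asset's parsed
    scene information and return the updated audio data for playback."""
    scene = dict(second_audio["scene_info"])   # parsed scene information (copied)
    scene.update(target_config)                # apply the user's target configuration
    return {**second_audio, "scene_info": scene}
```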
In some embodiments, in sending the updated sound effect scene information from the kernel layer to the application layer, the controller is configured to perform:
acquiring configuration parameters of the sound effect scene information;
sending the configuration parameters to the application layer;
and generating sound effect configuration information of the first audio data according to the configuration parameters, so as to update the first audio data.
In some embodiments, the initial location of the sound effect scene information is the kernel layer, and, in parsing the sound effect configuration information in the sound effect scene information, the controller is configured to perform:
acquiring the information format of the sound effect scene information at the kernel layer;
transmitting the sound effect scene information to the application layer;
and, at the application layer, parsing the sound effect scene information according to the parsing format corresponding to the information format, to obtain the sound effect configuration information.
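Selecting a parser that matches the kernel-reported information format can be sketched with a format-to-parser table; supporting only a JSON format here is an assumption of this sketch.

```python
import json

# Information format -> matching parser; extended per supported format.
PARSERS = {"json": json.loads}

def parse_scene_info(raw, info_format):
    """Decode raw scene information with the parser for its reported format."""
    parser = PARSERS.get(info_format)
    if parser is None:
        raise ValueError(f"unsupported scene information format: {info_format}")
    return parser(raw)
```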
In some embodiments, in updating the sound effect selection interface according to the second sound effect option, the controller is configured to perform:
traversing the second sound effect option to obtain sound effect options to be deleted and new sound effect options, wherein the new sound effect options are the sound effect options in the second sound effect option other than the first sound effect option;
and updating the sound effect selection interface according to the sound effect options to be deleted and the new sound effect options.
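The traversal that splits the options into a to-delete set and a new set amounts to a set difference in both directions; this sketch assumes the options are hashable labels.

```python
def diff_sound_effect_options(first_options, second_options):
    """Return (to_delete, new) relative to the currently shown first options."""
    first, second = set(first_options), set(second_options)
    to_delete = first - second   # shown now, unsupported by the second asset
    new = second - first         # supported by the second asset, not shown yet
    return to_delete, new
```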
In a second aspect, the present application provides a sound effect configuration method, applied to the display device of the first aspect, the method comprising:
when first media asset data is played, parsing a first sound effect option of the first media asset data, and controlling the display to display a sound effect selection interface, wherein the sound effect selection interface comprises the first sound effect option, and the first media asset data comprises first audio data;
in response to a sound effect selection instruction input by a user via the sound effect selection interface, controlling the audio output device to play the first audio data with a target sound effect option, wherein the target sound effect option is the first sound effect option designated by the sound effect selection instruction;
parsing a second sound effect option of second media asset data, wherein the second media asset data is the next media asset data adjacent to the first media asset data in a media asset play list, and the second media asset data comprises second audio data;
if the second sound effect option comprises the target sound effect option, controlling the audio output device to play the second audio data with the target sound effect option;
and if the second sound effect option does not comprise the target sound effect option, updating the sound effect selection interface according to the second sound effect option, and controlling the display to display the updated sound effect selection interface.
As can be seen from the above technical solutions, some embodiments of the present application provide a display device and a sound effect configuration method. The sound effect configuration method can parse the first sound effect option of first media asset data when playing the first media asset data, and control the display to display a sound effect selection interface including the first sound effect option; in response to the user's sound effect selection instruction based on the sound effect selection interface, control the audio output device to play the first audio data with the target sound effect option; and parse the second sound effect option of second media asset data. If the second sound effect option includes the target sound effect option, the audio output device is controlled to play the second audio data with the target sound effect option; if it does not, the sound effect selection interface is updated according to the second sound effect option and the display is controlled to display the updated interface.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of an operation scenario between a display device and a control device according to some embodiments of the present application;
fig. 2 is a schematic diagram of a hardware configuration of a display device according to some embodiments of the present application;
FIG. 3 is a schematic software configuration of a display device according to some embodiments of the present application;
FIG. 4 is a flowchart of a display device according to some embodiments of the present application continuing playback with default sound effect options;
FIG. 5 is a flowchart of a display device performing sound effect configuration according to some embodiments of the present application;
FIG. 6 is another flowchart of a display device performing sound effect configuration according to some embodiments of the present application;
FIG. 7 is a flowchart of a display device according to some embodiments of the present application determining language options based on language audio tracks;
FIG. 8 is a flow chart of a display device for playback with a target sound effect option according to some embodiments of the present application;
FIG. 9 is a flowchart of a display device updating the sound effect selection interface according to some embodiments of the present application;
FIG. 10 is an underlying logic diagram of a display device generating the sound effect selection interface provided by some embodiments of the present application;
FIG. 11 is a flowchart of a display device generating the sound effect selection interface according to sound effect configuration information in some embodiments of the present application;
fig. 12 is a flowchart of a display device replacing subtitle data in some embodiments of the present application;
Fig. 13 is an underlying logic diagram of a display device updating sound effect scene information in some embodiments of the application.
Detailed Description
Reference will now be made in detail to the embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The embodiments described below do not represent all embodiments consistent with the application; they are merely examples of systems and methods consistent with aspects of the application as set forth in the claims.
It should be noted that the brief description of the terminology in the present application is for the purpose of facilitating understanding of the embodiments described below only and is not intended to limit the embodiments of the present application. Unless otherwise indicated, these terms should be construed in their ordinary and customary meaning.
The terms first, second, third and the like in the description, in the claims, and in the above-described figures are used for distinguishing between similar objects or entities and not necessarily for describing a particular sequential or chronological order, unless otherwise indicated. It is to be understood that the terms so used are interchangeable under appropriate circumstances.
The terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or apparatus that comprises a list of elements is not necessarily limited to all elements explicitly listed, but may include other elements not expressly listed or inherent to such product or apparatus.
The term "module" refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware or/and software code that is capable of performing the function associated with that element.
In the embodiment of the present application, the display device 200 generally refers to a device having a screen display and a data processing capability. For example, display device 200 includes, but is not limited to, a smart television, a mobile terminal, a computer, a monitor, an advertising screen, a wearable device, a virtual reality device, an augmented reality device, and the like.
Fig. 1 is a schematic diagram of an operation scenario between a display device and a control device according to some embodiments of the present application. As shown in fig. 1, a user may operate the display device 200 through a touch operation, the mobile terminal 300, and the control device 100. Wherein the control device 100 is configured to receive an operation instruction input by a user, and convert the operation instruction into a control instruction recognizable and responsive by the display device 200. For example, the control device 100 may be a remote control, a stylus, a handle, or the like.
The mobile terminal 300 may serve as a control device for performing man-machine interaction between a user and the display device 200. The mobile terminal 300 may also be used as a communication device for establishing a communication connection with the display device 200 for data interaction. In some embodiments, the mobile terminal 300 may install a software application with the display device 200, implement connection communication through a network communication protocol, and achieve the purpose of one-to-one control operation and data communication. The audio/video content displayed on the mobile terminal 300 can also be transmitted to the display device 200, so as to realize the synchronous display function.
In some embodiments, the mobile terminal 300 or other electronic device may also simulate the functions of the control device 100 by running an application program that controls the display device 200.
As also shown in fig. 1, the display device 200 is also in data communication with the server 400 via a variety of communication means. The display device 200 may establish communication connections via a local area network (LAN), a wireless local area network (WLAN), or other networks.
The display device 200 may provide a broadcast-receiving TV function and may additionally provide smart network TV functions with computing support, including but not limited to network TV, smart TV, Internet Protocol TV (IPTV), and the like.
Fig. 2 is a block diagram of a hardware configuration of the display device 200 of fig. 1 according to some embodiments of the present application.
In some embodiments, the display apparatus 200 may include at least one of a modem 210, a communication device 220, a detector 230, a device interface 240, a controller 250, a display 260, an audio output device 270, a memory, a power supply, a user input interface.
In some embodiments, the detector 230 is used to collect signals of the external environment or of interaction with the outside. For example, the detector 230 includes a light receiver, a sensor for collecting the intensity of ambient light; or the detector 230 includes an image collector, such as a camera, which may be used to collect external environment scenes, user attributes, or user interaction gestures; or the detector 230 includes a sound collector, such as a microphone, for receiving external sounds.
In some embodiments, the display 260 includes a display screen component for presenting pictures and a drive component that drives image display. The display 260 is used for receiving and displaying image signals output from the controller 250. For example, the display 260 may be used to display video content, image content, menu manipulation interface components, user manipulation UI interfaces, and the like.
In some embodiments, the communication apparatus 220 is a component for communicating with an external device or server 400 according to various communication protocol types. The display apparatus 200 may be provided with a plurality of communication devices 220 according to the supported communication manner. For example, when the display apparatus 200 supports wireless network communication, the display apparatus 200 may be provided with a communication device 220 including a WiFi function. When the display apparatus 200 supports bluetooth connection communication, the display apparatus 200 needs to be provided with a communication device 220 including a bluetooth function.
The communication means 220 may communicatively connect the display device 200 with an external device or the server 400 by means of a wireless or wired connection. Wherein the wired connection may connect the display device 200 with an external device through a data line, an interface, etc. The wireless connection may then connect the display device 200 with an external device through a wireless signal or a wireless network. The display device 200 may directly establish a connection with an external device, or may indirectly establish a connection through a gateway, a route, a connection device, or the like.
In some embodiments, the controller 250 may include at least one of a central processor, a video processor, an audio processor, a graphic processor, a power supply processor, first to nth interfaces for input/output, and the controller 250 controls the operation of the display device and responds to the user's operation through various software control programs stored on the memory. The controller 250 controls the overall operation of the display apparatus 200.
In some embodiments, the controller 250 and the modem 210 may be located in separate devices, i.e., the modem 210 may also be located in an external device to the main device in which the controller 250 is located, such as an external set-top box or the like.
In some embodiments, a user may input a user command through a graphical user interface (Graphical User Interface, GUI) displayed on the display 260, and the user input interface receives the user input command through the GUI.
In some embodiments, audio output device 270 may be a speaker local to display device 200 or an audio output device external to display device 200. For an external audio output device of the display device 200, the display device 200 may also be provided with an external audio output terminal, and the audio output device may be connected to the display device 200 through the external audio output terminal to output sound of the display device 200.
In some embodiments, user input interface 280 may be used to receive instructions from user input.
To perform user interactions, in some embodiments, display device 200 may be run with an operating system. The operating system is a computer program for managing and controlling hardware resources and software resources in the display device 200. The operating system may control the display device to provide a user interface, for example, the operating system may directly control the display device to provide a user interface, or may run an application to provide a user interface. The operating system also allows a user to interact with the display device 200.
It should be noted that, the operating system may be a native operating system based on a specific operating platform, a third party operating system customized based on a depth of the specific operating platform, or an independent operating system specially developed for a display device.
The operating system may be divided into different modules or tiers depending on the functionality implemented. For example, as shown in FIG. 3, in some embodiments the system is divided into four layers, from top to bottom: an application layer (simply "application layer"), an application framework layer (Application Framework, simply "framework layer"), a system library layer, and a kernel layer.
In some embodiments, the application layer is used to provide services and interfaces for applications so that the display device 200 can run applications and interact with users based on the applications. The application layer may be run with at least one application program, which may be a Window (Window) program, a system setting program, or a clock program of the operating system, or may be an application program developed by a third party developer. In particular implementations, the application packages in the application layer are not limited to the above examples.
The framework layer provides an application programming interface (Application Programming Interface, API) and programming framework for the application. The application framework layer includes a number of predefined functions. The application framework layer corresponds to a processing center that decides to let the applications in the application layer act. Through the API interface, the application program can access the resources in the system and acquire the services of the system in the execution.
As shown in fig. 3, in the embodiment of the present application, the application framework layer includes a view system (View System), managers (Managers), a content provider (Content Provider), and the like, where the view system may design and implement interfaces and interactions of the application, and includes lists (Lists), grids (Grids), text boxes, buttons (Buttons), and the like. The managers include at least one of an activity manager (Activity Manager) for interacting with all activities running in the system, a location manager (Location Manager) for providing system location service access to system services or applications, a package manager (Package Manager) for retrieving various information about application packages currently installed on the device, a notification manager (Notification Manager) for controlling the display and removal of notification messages, and a window manager (Window Manager) for managing icons, windows, toolbars, wallpaper, and desktop components on the user interface.
In some embodiments, the activity manager is used to manage the lifecycle of the individual applications as well as the usual navigation rollback functions, such as controlling the exit, opening, fallback, etc. of the applications. The window manager is used for managing all window programs, such as obtaining the size of a display screen, judging whether a status bar exists or not, locking the screen, intercepting the screen, controlling the change of the display window, for example, reducing the display window to display, dithering display, distorting display and the like.
In some embodiments, the system runtime layer may provide support for the framework layer, and when the framework layer is in use, the operating system may run instruction libraries, such as the C/C++ instruction library, contained in the system runtime layer to implement the functions to be implemented by the framework layer.
In some embodiments, the kernel layer is a functional hierarchy between the hardware and software of the display device 200. The kernel layer can realize the functions of hardware abstraction, multitasking, memory management and the like. For example, as shown in FIG. 3, a hardware driver may be configured in the kernel layer, where the driver included in the kernel layer may be at least one of an audio driver, a display driver, a Bluetooth driver, a camera driver, a WIFI driver, a USB driver, an HDMI driver, a sensor driver (such as a fingerprint sensor, a temperature sensor, a pressure sensor, etc.), and a power driver.
It should be noted that the above examples are merely a simple division of the functions of an operating system and do not limit the specific form of the operating system of the display device 200 in the embodiment of the present application. The number of levels and the specific types of levels included in the operating system may take other forms depending on factors such as the functions of the display device and the type of the operating system.
In some embodiments, the display device 200 may display a user interface in response to an interactive instruction entered by a user. The user interface may include a plurality of application controls, e.g., a media play control, a game control, a browser control, etc., and the user may input a launch instruction based on the plurality of application controls in the user interface to control the display device 200 to launch a corresponding application. Taking the example of starting the media asset playing control, the display device 200 may display a media asset playing interface, where the media asset playing interface includes a media asset list, so as to provide playable media asset data to the user.
When playing media asset data, the display device 200 may parse the media asset data to obtain its configuration options, where the configuration options include media definition options, sound effect options, or play options. The display device 200 may generate a configuration interface according to the configuration options. For example, the display device 200 may obtain, through parsing, the sound effect options with which the media asset data can be played, and generate a sound effect interface according to those options, so as to present the user with the sound effect options available for the current media asset data. The user may select a sound effect option in the sound effect interface and control the display device 200 to play the media asset data with the selected sound effect.
In some embodiments, as shown in fig. 4, the media asset playing control may include an automatic playing function, that is, after the display device 200 finishes playing the current media asset data, it automatically switches to the next media asset data following the current media asset data, so as to play media asset data continuously.
However, because different media asset data support different sound effect options, when the display device 200 switches to the next media asset data, the sound effect options supported by that media asset data are uncertain, so the display device plays it according to its initial sound effect option. If the initial sound effect option of the next media asset data differs from the sound effect option the user selected on the sound effect interface, the user needs to manually adjust the sound effect option to switch from the initial sound effect option back to the selected one.
In addition, if the media asset data does not support the selected sound effect option, the user needs to re-input an instruction to control the display device 200 to display the sound effect selection interface and select a new sound effect option on it. As a result, in the process of media asset data switching, adjusting the sound effect option is cumbersome, which degrades the user experience.
In some embodiments, the display apparatus 200 may perform sound effect configuration when switching the played media asset data. That is, some embodiments of the present application provide a display apparatus 200, the display apparatus 200 including a display 260, an audio output device 270, and a controller 250. The display 260 is configured to display a user interface. The audio output device 270 is configured to play audio data. The controller 250 is configured to perform a sound effect configuration method. Fig. 5 is a flowchart of the display device performing sound effect configuration according to some embodiments of the present application, and fig. 6 is a timing diagram of the display device performing sound effect configuration according to some embodiments of the present application; the display device 200 performs sound effect configuration in accordance with the timing relationship shown in fig. 6. Referring to fig. 5 and 6, the method includes the following steps:
and S100, when the first media data are played, analyzing first sound effect options of the first media data, and controlling the display to display a sound effect selection interface.
The first media asset data is the media asset data currently played by the display device. While the display 260 displays the media asset playing interface, the display device 200 may determine the first media asset data in response to a selection instruction input by the user based on the media asset list. The media asset list may include a plurality of media asset data items for the user to select and view. For example, when the media asset list includes media asset data A, media asset data B, and media asset data C, and the selection instruction selects media asset data A for playing, media asset data A is the first media asset data.
For video assets, the first media asset data may include both audio data, i.e., the first audio data, and video data, i.e., the first media asset video. For audio assets, the first media asset data may include only the first audio data. In the process of playing the media asset data, the controller 250 may control the audio output device 270 to play the first audio data, and may also control the display 260 to display the media asset picture corresponding to the first media asset video in synchronization with the playing progress of the first audio data, so as to play the first media asset data with audio and video in sync.
When playing the first media asset data, the controller 250 may parse the first sound effect options of the first media asset data, where a first sound effect option is an audio playing effect the first media asset data can currently support. That is, the display device 200 can play the first media asset data according to at least one first sound effect option, controlling the audio output apparatus 270 to play the audio portion of the first media asset data with the audio playing effect corresponding to that option. The first sound effect options may include language options, ambient sound options, channel play options, feature sound options, and the like. When the display device 200 plays the first media asset data, the controller 250 may generate a sound effect control and control the display 260 to display it in a designated area of the media asset picture. In response to a response event from the sound effect control, the display device 200 may control the display 260 to display a sound effect selection interface. The sound effect selection interface may be generated from the parsed first sound effect options, and therefore includes all of them, so as to show the user the sound effect options supportable by the first media asset data.
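The parsing and interface-generation step above can be sketched as follows. This is a minimal illustration assuming a dict-based asset model; the names `parse_sound_options` and `build_selection_interface` are assumptions for this sketch and do not come from the original disclosure.

```python
def parse_sound_options(media_asset):
    """Return the sound effect options the asset can currently support."""
    options = []
    for track in media_asset.get("audio_tracks", []):
        options.append(("language", track["language"]))
    for mode in media_asset.get("channel_modes", []):
        options.append(("channel", mode))
    return options

def build_selection_interface(options):
    """Group parsed options by type so the interface shows one row per option type."""
    interface = {}
    for opt_type, value in options:
        interface.setdefault(opt_type, []).append(value)
    return interface

asset_a = {
    "audio_tracks": [{"language": "Chinese"}, {"language": "English"}],
    "channel_modes": ["stereo", "surround sound"],
}
ui = build_selection_interface(parse_sound_options(asset_a))
# ui == {"language": ["Chinese", "English"], "channel": ["stereo", "surround sound"]}
```

The interface model groups options by type because, as noted later, a real sound effect selection interface may mix language, ambient sound, and channel options at the same time.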
And S200, responding to an audio selection instruction input by a user based on the audio selection interface, and controlling the audio output device to play the first audio data with a target audio option.
After the display 260 displays the sound effect selection interface, the user may input a sound effect selection instruction based on the sound effect selection interface to select a target sound effect option among the first sound effect options in the sound effect selection interface. The display apparatus 200 may control the audio output device to play the first audio data with the target sound effect option in response to the sound effect selection instruction. For example, the target sound effect option is a left channel play mode, and the controller 250 plays the first audio data in the left channel play mode.
S300, analyzing a second sound effect option of the second media data.
The second media asset data is the media asset data adjacent to and after the first media asset data in the media asset playlist. For example, when the media asset list includes media asset data A, media asset data B, and media asset data C, the display device 200, absent other control instructions, plays the three items in the order A, B, C: after media asset data A finishes playing, the display device 200 automatically plays media asset data B, and after media asset data B finishes, it automatically plays media asset data C. When the first media asset data is media asset data A, the second media asset data is media asset data B.
After the display device 200 finishes playing the first media asset data, the controller 250 may parse the second sound effect options of the second media asset data, so as to perform subsequent sound effect configuration on the second media asset data according to those options and continue playing the second media asset data accordingly.
In some embodiments, the first sound effect options and the second sound effect options may each include a language option, where the language option represents the switchable languages of the media asset data. Taking the language option of the second sound effect options as an example, the controller 250 may determine the language options of the second media asset data by parsing its language audio tracks. For example, as shown in fig. 7, parsing the second media asset data may reveal that it includes a Chinese language audio track, an English language audio track, and a Spanish language audio track. Each language audio track corresponds to second media asset data of one language type, and the controller 250 may determine from the tracks that the language options of the second media asset data are a Chinese option, an English option, and a Spanish option.
Corresponding to the language audio tracks, the second media asset data should include at least Chinese-language media asset data, English-language media asset data, and Spanish-language media asset data. The controller 250 may decode the media asset data of the corresponding language according to its language audio track and control the audio output device 270 to play the audio data in the decoded media asset data. For example, when the target sound effect option is the Chinese language option, the controller 250 may decode the Chinese-language media asset data through the Chinese language audio track and control the audio output device 270 to play the decoded Chinese-language media asset data.
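The track-to-decoder selection above can be illustrated with a hedged sketch; `select_track` and the track dictionaries are assumed helpers for this example, not APIs of the display device.

```python
def select_track(language_tracks, target_language):
    """Return the language audio track matching the target language option,
    falling back to the first track when the language is unavailable."""
    for track in language_tracks:
        if track["language"] == target_language:
            return track
    return language_tracks[0]

tracks = [
    {"language": "Chinese", "codec": "aac"},
    {"language": "English", "codec": "aac"},
    {"language": "Spanish", "codec": "aac"},
]
# With a Chinese target option, the Chinese track is handed to the decoder.
chosen = select_track(tracks, "Chinese")
```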
And S400, if the second sound effect option comprises the target sound effect option, controlling the audio output device to play the second audio data with the target sound effect option.
When parsing the second sound effect option, as shown in fig. 8, the controller 250 may traverse the second sound effect option according to the target sound effect option. The option type of the second sound effect option may be the same as that of the first sound effect option, for example, the second sound effect option may include a language option, an environment sound option, a sound channel playing option, a feature sound option, or the like.
When the second sound effect options include the target sound effect option, the display apparatus 200 may directly control the audio output device 270 to play the second audio data according to the target sound effect option. For example, suppose the target sound effect option is the stereo surround sound option. After the audio output device 270 finishes playing the first audio data with the stereo surround sound option, when the display apparatus 200 switches to playing the second media asset data and the controller 250 parses that the second sound effect options include the stereo surround sound option, the controller 250 directly controls the audio output device 270 to play the second audio data with the stereo surround sound option. This spares the user from manually switching back to the target sound effect option and improves the sound effect configuration efficiency for media asset data.
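The S400/S500 branch above can be sketched as a small decision function, under the assumption that sound effect options are plain strings; `choose_playback_option` is an illustrative name, not part of the disclosure.

```python
def choose_playback_option(second_options, target_option):
    """Keep the target option if the next asset supports it (S400);
    otherwise signal that the selection interface must be refreshed (S500)."""
    if target_option in second_options:
        return target_option, False   # play directly, no interface update
    return None, True                 # fall back and update the interface

# Next asset supports the target: play with it directly.
option, needs_update = choose_playback_option(
    ["Chinese", "stereo surround sound"], "stereo surround sound")
# option == "stereo surround sound", needs_update == False
```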
And S500, if the second sound effect option does not comprise the target sound effect option, updating a sound effect selection interface according to the second sound effect option, and controlling the display to display the updated sound effect selection interface.
If the second sound effect options do not include the target sound effect option, the display device 200 needs to update the sound effect selection interface according to the second sound effect options, to prompt the user that the display device 200 cannot currently play the second media asset data with the target sound effect option, so that the user can select another sound effect option on the sound effect selection interface. For example, as shown in fig. 9, when the first media asset data only supports playing in Spanish, the target sound effect option is the Spanish language option, and the corresponding sound effect selection interface accordingly includes only the Spanish language option. When the first audio data finishes playing, the display device 200 switches to the second media asset data according to the playing order; at this point, the second sound effect options parsed by the controller 250 include only a Chinese language option, so the display device 200 cannot continue playing the second audio data with the Spanish option. The controller 250 then updates the sound effect selection interface according to the second sound effect options, removing the original Spanish language option and newly displaying the Chinese language option, to prompt the user that the second media asset data can be played with the Chinese language option.
It should be noted that, in the process of updating the sound effect selection interface by the controller 250, only one sound effect option is taken as an example for explanation of the sound effect selection interface. It should be appreciated that in an actual application scenario, the sound effect selection interface may include a plurality of sound effect options. Also, the sound effect options may include different types at the same time, for example, a language option, an ambient sound option, a channel play option, or a feature sound option, etc.
After updating the sound effect selection interface, the controller 250 may control the display 260 to display the updated sound effect selection interface so as to prompt the user to reselect the target sound effect option among the second sound effect options based on the updated sound effect selection interface.
In some embodiments, the display device 200 may respond to the target sound effect option through the operating system, so that the display device 200 plays the first audio data according to the target sound effect option. The operating system comprises four layers, namely an application layer, a framework layer, a system library layer, and a kernel layer. The framework layer may include a middleware layer, where the middleware layer includes device middleware configured to transfer the configuration information and data of the media asset data when switching between different sound effect options, so that the sound effect options take effect. Since the process of performing sound effect configuration may involve only the application layer, the middleware layer, and the kernel layer, the embodiment of the present application explains sound effect configuration in detail using only these three layers.
Before the display device 200 plays the first media asset data, the local media asset data may be traversed in memory to detect the first media asset code in the local media asset data. The first media asset code is the encoded first media asset data; the encoding format can improve data transmission efficiency and save storage space. When the local media asset data includes the first media asset code, the controller 250 may decode the first media asset code to obtain the first media asset data.
If the first media asset code is not detected in the local media asset data, the controller 250 may establish a communication connection with the server 400 through the communication device 220 and send a media asset request to the server 400, so that the server 400 responds to the request by feeding back the URL of the first media asset code; the first media asset code is then decoded online to obtain the first media asset data.
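The local-first lookup with a server fallback described above can be sketched as follows. All names here (`decode`, `decode_online`, `obtain_first_asset`, the store layout) are stand-ins for illustration, not the device's actual interfaces.

```python
def decode(code):
    """Stand-in for decoding a locally stored media asset code."""
    return {"decoded_from": "local", "code": code}

def decode_online(url):
    """Stand-in for online decoding via the URL the server feeds back."""
    return {"decoded_from": "online", "url": url}

def obtain_first_asset(local_store, asset_id, fetch_url):
    """Decode from the local store when the encoded asset exists,
    otherwise request the asset's URL from the server."""
    code = local_store.get(asset_id)
    if code is not None:
        return decode(code)                     # local media asset code found
    return decode_online(fetch_url(asset_id))   # server responds with the URL

local_store = {"asset-A": b"\x00encoded"}
asset = obtain_first_asset(local_store, "asset-A",
                           lambda i: "http://example.invalid/" + i)
# asset["decoded_from"] == "local"
```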
Fig. 10 is a diagram of the underlying logic by which the display device generates the sound effect selection interface according to an embodiment of the present application. Referring to fig. 10, after decoding, the first media asset data is located at the kernel layer, and the controller 250 may perform audio parsing on the first media asset data at the kernel layer to obtain the audio scene information (Audio Scene Information, ASI) of the first media asset data, i.e., ASI information, which is a kind of metadata for describing audio scenes and audio objects.
The audio scene information may include the sound effect scenes supported by the first media asset data, such as language scenes, ambient sound scenes, and channel scenes. The language scenes may include a Chinese scene, an English scene, and the like; the ambient sound scenes may include a surround sound scene, a stereo scene, and the like; and the channel scenes may include left channel, right channel, two-channel, and the like.
The controller 250 may generate the first sound effect options of the first media asset data according to the audio scene information and transmit them to the device middleware in a message format. The device middleware generates sound effect state information according to the first sound effect options and sends it to the application layer, so that the application layer generates the sound effect selection interface according to the sound effect state information and displays it on the user interface.
In some embodiments, the first sound effect options may include a plurality of options, where each option may further include a plurality of sub-options. Taking the language option as an example, it may include several sub-options, such as a Chinese language option and an English language option. The first sound effect options may also include an initial sound effect option. When the display device 200 plays the first media asset data, if no sound effect selection instruction is received, the controller 250 may play the first audio data according to the initial sound effect option.
In the sound effect selection interface, the focus selects the initial sound effect option by default, and in response to the sound effect selection instruction the controller 250 may move the focus so that it rests on the target sound effect option. In generating the sound effect selection interface, the controller 250 may parse the sound effect configuration information in the sound effect scene information based on the first sound effect options. The sound effect configuration information includes current configuration information and optional configuration information. The current configuration information characterizes the sound effect option with which the first media asset data is being played; for example, as shown in fig. 11, when the display device 200 is playing the first audio data with the Chinese language option, the current configuration information of the language option is the Chinese language option. The optional configuration information characterizes the other sound effect options of the media asset data besides the current configuration information; for example, if the first media asset data also supports an English option and a Spanish option in addition to the Chinese option, the optional configuration information may include the English option and the Spanish option.
After acquiring the current configuration information and the optional configuration information, the controller 250 may generate the sound effect selection interface from them: the initial sound effect option is generated according to the current configuration information, and the other first sound effect options are generated according to the optional configuration information. At this time, the focus is located on the initial sound effect option to prompt the user that the display device 200 is playing the first media asset data with the initial sound effect option. The user may input a sound effect selection instruction to select a target sound effect option among the other first sound effect options and replace the current one, or input a sound effect confirmation instruction to control the display device 200 to continue playing the first media asset data with the initial sound effect option.
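The interface-building step above reduces to combining the two pieces of configuration information. A minimal sketch, assuming the parsed configuration is a current option name plus a list of optional option names (`build_interface` is an illustrative helper):

```python
def build_interface(current_config, optional_config):
    """The initial (focused) option comes from the current configuration
    information; the remaining first sound effect options come from the
    optional configuration information."""
    return {
        "options": [current_config] + list(optional_config),
        "focus": current_config,   # focus defaults to the initial option
    }

ui = build_interface("Chinese", ["English", "Spanish"])
# ui["focus"] == "Chinese"; ui["options"] == ["Chinese", "English", "Spanish"]
```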
In some embodiments, the first media asset data includes first subtitle data, and the controller 250 may display the first subtitle data synchronously when controlling the display 260 to display the media asset picture corresponding to the media asset data, so that the user can view the media asset picture and the first subtitle data at the same time. Accordingly, when the controller 250 controls the audio output device 270 to play the first audio data with the target sound effect option, the subtitle data may also be replaced according to the target sound effect option.
In the process of replacing the subtitle data, the controller 250 may inspect the target sound effect option, and if the target sound effect option is a language option, the controller 250 may parse the target language information in the target sound effect option. After obtaining the target language information, the controller 250 may acquire the target subtitle data according to it; for example, if the target language information is Chinese language information, the controller 250 acquires Chinese subtitle data, and if the target language information is English language information, the controller 250 acquires English subtitle data.
Before responding to the sound effect selection instruction, the controller 250 controls the display 260 to display the subtitle data of the initial language information in the initial sound effect option; at this time, the first subtitle data is the subtitle data of the initial language information. As shown in fig. 12, after obtaining the target subtitle data, the controller 250 may replace the subtitle data of the initial language information with the target subtitle data, so that the display 260 displays the target subtitle data and the first subtitle data is synchronized with the language of the first audio data.
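The language-driven subtitle replacement above can be sketched as follows; the subtitle store, file names, and option tuple shape are assumptions made for this illustration only.

```python
SUBTITLE_STORE = {  # hypothetical store keyed by language
    "Chinese": "subs_zh.srt",
    "English": "subs_en.srt",
}

def replace_subtitles(target_option, current_subtitles):
    """Swap in the subtitle data of the target language when the target
    sound effect option is a language option; otherwise keep the current
    subtitles unchanged."""
    option_type, value = target_option
    if option_type != "language":
        return current_subtitles
    return SUBTITLE_STORE.get(value, current_subtitles)

# Switching the language option to Chinese replaces the initial subtitles.
new_subs = replace_subtitles(("language", "Chinese"), "subs_es.srt")
# new_subs == "subs_zh.srt"
```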
While replacing the subtitle data, the controller 250 may also acquire corresponding audio data according to the target language information. For example, when the target language information is Chinese language information, the controller 250 may retrieve first audio data in Chinese according to the Chinese language information and replace the first audio data of the initial language information with it, so as to update the first audio data.
After updating the subtitle data and the first audio data according to the target sound effect option, in order to alleviate the problem that the playing timing of the first audio data does not coincide with the display timing of the target subtitle data, the controller 250 may align the timing relationship between the target subtitle data and the updated first audio data. It may then, according to that timing relationship, synchronously control the audio output device 270 to play the first audio data and control the display 260 to display the target subtitle data, improving the audio-picture synchronization when the display device 200 plays the first media asset data.
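One simple way to realize the alignment above is to re-anchor the subtitle cues to the audio clock and discard cues that have already elapsed. This is a simplified sketch under the assumption of millisecond-based cue dictionaries; the real device's timing model is not specified in the disclosure.

```python
def align_cues(cues, offset_ms, audio_position_ms):
    """Shift cue timestamps by the offset between the subtitle timeline and
    the updated audio timeline, keeping only cues still to be displayed."""
    aligned = []
    for cue in cues:
        start = cue["start_ms"] + offset_ms
        end = cue["end_ms"] + offset_ms
        if end > audio_position_ms:   # cue has not fully elapsed yet
            aligned.append({"start_ms": start, "end_ms": end, "text": cue["text"]})
    return aligned

cues = [
    {"start_ms": 0, "end_ms": 1000, "text": "first line"},
    {"start_ms": 5000, "end_ms": 7000, "text": "second line"},
]
remaining = align_cues(cues, 0, 2000)
# only the second cue remains once playback passes 2000 ms
```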
In some embodiments, as shown in fig. 13, after responding to the sound effect selection instruction, the controller 250 may parse the target sound effect option at the application layer to obtain the target configuration information. After obtaining the target configuration information, the controller 250 needs to send the target configuration information to the kernel layer to perform configuration update according to the target sound effect option.
To this end, the controller 250 may transmit the target configuration information to the middleware layer through the first interface. The first interface is used for connecting the application layer and the middleware layer. After the target configuration information is sent to the middleware layer, the configuration information can be sent to the kernel layer through the second interface, and the sound effect scene information of the first media data is updated based on the target configuration information in the kernel layer.
For example, when the display device 200 is playing the first media data with the surround sound option, the sound effect selection instruction selects the target sound effect option as the stereo sound option, and the controller 250 may change the current configuration information from the surround sound option to the stereo sound option at the kernel layer to update the sound effect scene information of the first media data. After the update is completed, the controller 250 may transmit the sound effect scene information from the kernel layer to the application layer to acquire updated first audio data.
In updating the sound effect scene information, the controller 250 may acquire and parse the target configuration information, which includes a behavior event (action event). The action event includes the event parameters used to update the sound effect scene information, for example five parameters: uuid, version, actionType, paramInt, and paramFloat. uuid is an identification parameter used to identify the media asset data. version is a version information parameter of the sound effect option. actionType is an action value parameter that varies with the sound effect type; different sound effect types have different action values. paramInt and paramFloat represent the sound effect parameters, where paramInt is the desired value being set and paramFloat is the floating amount of the sound effect parameter. For example, when setting the stereo sound effect parameter from 50 to 80, paramInt is 80, and the sound effect parameter floats up by 30, so paramFloat is "+30". Similarly, if the sound effect parameter floats down by 30, paramFloat is "-30".
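The five-parameter action event described above can be sketched as a small builder; the function name and field encoding are illustrative assumptions consistent with the example values in the text.

```python
def make_action_event(uuid, version, action_type, old_value, new_value):
    """Build an action event: paramInt carries the desired value, and
    paramFloat carries the signed floating amount relative to the
    previous sound effect parameter value."""
    delta = new_value - old_value
    return {
        "uuid": uuid,
        "version": version,
        "actionType": action_type,
        "paramInt": new_value,
        "paramFloat": ("+" + str(delta)) if delta >= 0 else str(delta),
    }

# Setting the stereo sound effect parameter from 50 to 80:
event = make_action_event("asset-001", "1.0", "STEREO", 50, 80)
# event["paramInt"] == 80 and event["paramFloat"] == "+30"
```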
The controller 250 may configure the configuration parameters of the sound effect scene information according to the target configuration information, where the configuration parameters are the sound effect parameters after the target sound effect option takes effect. The controller 250 may send the configuration parameters to the application layer to generate the sound effect configuration information of the first audio data according to the configuration parameters, so as to configure the first audio data according to the sound effect configuration information and complete the update of the first audio data.
In some embodiments, when the kernel layer parses the sound effect scene information, the controller 250 may obtain the information format of the sound effect scene information. To facilitate information transfer, the sound effect scene information may be in xml format. The controller 250 may send the sound effect scene information to the application layer and, at the application layer, parse it according to a parsing format corresponding to the xml format, such as the DOM parsing format, to filter redundant data in the sound effect scene information and obtain the sound effect configuration information.
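A hedged sketch of that DOM-based filtering, using the Python standard-library parser; the element names in the sample document are assumptions for illustration, not the device's actual schema.

```python
from xml.dom import minidom

SCENE_XML = """<audioScene>
  <current type="channel">surround sound</current>
  <optional type="channel">stereo</optional>
  <debug>redundant data to be filtered out</debug>
</audioScene>"""

def parse_scene(xml_text):
    """DOM-parse the xml-format scene information, keeping only the
    current/optional configuration elements and dropping the rest."""
    doc = minidom.parseString(xml_text)
    config = {"current": [], "optional": []}
    for tag in config:
        for node in doc.getElementsByTagName(tag):
            config[tag].append(node.firstChild.data.strip())
    return config

# parse_scene(SCENE_XML) == {"current": ["surround sound"], "optional": ["stereo"]}
```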
In some embodiments, after the display device 200 finishes playing the first media asset data, if the second sound effect options include the target sound effect option, the controller 250 may obtain the second audio data and parse its sound effect scene information. At this time, the sound effect scene information of the second audio data corresponds to its initial sound effect option. In order to control the audio output device 270 to continue playing the second audio data with the target sound effect option, the controller 250 may configure the sound effect scene information of the second audio data according to the target configuration information and update the second audio data according to the configured scene information, so that the audio output device 270 plays the updated second audio data with the target sound effect option. If the sound effect scene information is already the same as the target configuration information, indicating that the initial sound effect option of the second audio data is the same as the target sound effect option, the controller 250 may directly control the audio output device 270 to play the second audio data.
In some embodiments, the user may input a sound effect selection instruction in the middle of playing the first media asset data or the second media asset data on the display device 200. In order to adjust the sound effect according to the playing progress, taking the first media asset data as an example, the controller 250 may obtain the playing progress of the first media asset data. When the configuration of the sound effect scene information of the first media asset data is completed according to the sound effect configuration information, the audio output device 270 may be controlled to continue playing the first media asset data with the target sound effect option based on the playing progress.
In some embodiments, when updating the sound effect selection interface, the first and second media asset data may share some of the same sound effect options. Accordingly, the controller 250 may traverse the second sound effect options to obtain the sound effect options to be deleted and the newly added sound effect options. The newly added sound effect options are the second sound effect options not included among the first sound effect options, and the sound effect options to be deleted are the first sound effect options not included among the second sound effect options. The controller 250 may update the sound effect selection interface according to the sound effect options to be deleted and the newly added sound effect options.
For example, when the first sound effect options are a Chinese language option, an English language option, and a stereo option, the sound effect selection interface may include these three sound effect options. If the second sound effect options are a Chinese language option, a Spanish language option, and a surround sound option, the first and second sound effect options both include the Chinese language option, which is therefore a retained sound effect option. Among the first sound effect options, the sound effect options to be deleted are the English language option and the stereo option; among the second sound effect options, the newly added sound effect options are the Spanish language option and the surround sound option. When updating the sound effect selection interface, the controller 250 may delete the English language option and the stereo option and add the Spanish language option and the surround sound option, thereby completing the update of the sound effect selection interface.
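The update in this example amounts to a simple list diff; the function name and string labels below are illustrative:

```python
def diff_sound_effect_options(first_options, second_options):
    """Split sound effect options into those to delete (only in the first
    list), those retained (in both), and those newly added (only in the
    second list), preserving list order."""
    to_delete = [o for o in first_options if o not in second_options]
    retained = [o for o in first_options if o in second_options]
    newly_added = [o for o in second_options if o not in first_options]
    return to_delete, retained, newly_added

# The example from the description above:
first = ["Chinese", "English", "Stereo"]
second = ["Chinese", "Spanish", "Surround"]
to_delete, retained, newly_added = diff_sound_effect_options(first, second)
```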
Some embodiments of the present application further provide a sound effect configuration method, which is applied to the display device 200 provided in the foregoing embodiments, where the display device 200 includes at least a display 260, an audio output device 270, and a controller 250. The display 260 is configured to display a user interface, and the audio output device 270 is configured to play audio data. The method includes the following steps:
S100, when playing the first media asset data, parsing a first sound effect option of the first media asset data, and controlling the display to display a sound effect selection interface.
Wherein the sound effect selection interface comprises the first sound effect option, and the first media asset data comprises first audio data.
S200, in response to a sound effect selection instruction input by a user based on the sound effect selection interface, controlling the audio output device to play the first audio data with a target sound effect option.
Wherein the target sound effect option is the first sound effect option designated by the sound effect selection instruction.
S300, parsing a second sound effect option of second media asset data.
Wherein the second media asset data is the next media asset data adjacent to the first media asset data in a media asset play list, and the second media asset data comprises second audio data.
S400, if the second sound effect option comprises the target sound effect option, controlling the audio output device to play the second audio data with the target sound effect option.
S500, if the second sound effect option does not comprise the target sound effect option, updating the sound effect selection interface according to the second sound effect option, and controlling the display to display the updated sound effect selection interface.
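Steps S100 to S500 can be summarized in a short control-flow sketch, where the callback parameters stand in for the user's sound effect selection instruction, the audio output device, and the display (all names are illustrative):

```python
def configure_sound_effects(first_asset, second_asset,
                            choose_option, play, update_interface):
    """Control-flow sketch of steps S100-S500. `choose_option` stands in
    for the sound effect selection instruction; `play` and
    `update_interface` stand in for the audio output device and display."""
    first_options = first_asset["sound_effect_options"]    # S100: parse
    update_interface(first_options)                        # S100: display
    target = choose_option(first_options)                  # S200: user picks
    play(first_asset["audio"], target)                     # S200: play first
    second_options = second_asset["sound_effect_options"]  # S300: parse
    if target in second_options:                           # S400
        play(second_asset["audio"], target)
    else:                                                  # S500
        update_interface(second_options)
    return target

log = []
first = {"audio": "a1", "sound_effect_options": ["Chinese", "Stereo"]}
second = {"audio": "a2", "sound_effect_options": ["Chinese", "Surround"]}
target = configure_sound_effects(
    first, second,
    choose_option=lambda opts: opts[0],
    play=lambda audio, opt: log.append(("play", audio, opt)),
    update_interface=lambda opts: log.append(("ui", tuple(opts))),
)
```

Because "Chinese" is also a second sound effect option here, the S400 branch plays the second audio data directly and the interface is not refreshed.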
As can be seen from the above technical solutions, some embodiments of the present application provide a display device and a sound effect configuration method. When playing the first media asset data, the method parses a first sound effect option of the first media asset data and controls a display to display a sound effect selection interface including the first sound effect option. In response to a sound effect selection instruction input by the user based on the sound effect selection interface, the method controls the audio output device to play the first audio data with the target sound effect option. The method then parses a second sound effect option of the second media asset data: if the second sound effect option includes the target sound effect option, the audio output device is controlled to play the second audio data with the target sound effect option; if not, the sound effect selection interface is updated according to the second sound effect option, and the display is controlled to display the updated sound effect selection interface.
The same and similar parts of the embodiments in this specification are referred to each other, and are not described herein.
It should be noted that the above embodiments are merely intended to illustrate, not to limit, the technical solution of the present application. Although the present application has been described in detail with reference to the above embodiments, those skilled in the art should understand that the technical solutions described in the above embodiments may be modified, or some or all of their technical features may be equivalently replaced, and such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.
The foregoing description, for purposes of explanation, has been presented in conjunction with specific embodiments. The illustrative discussions above are not intended to be exhaustive or to limit the embodiments to the precise forms disclosed above. Many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles and the practical application, to thereby enable others skilled in the art to best utilize the embodiments and various embodiments with various modifications as are suited to the particular use contemplated.

Claims (10)

1. A display device, characterized by comprising:
A display configured to display a user interface;
an audio output device configured to play audio data;
A controller configured to:
when playing first media asset data, parsing a first sound effect option of the first media asset data, and controlling the display to display a sound effect selection interface, wherein the sound effect selection interface comprises the first sound effect option, and the first media asset data comprises first audio data;
in response to a sound effect selection instruction input by a user based on the sound effect selection interface, controlling the audio output device to play the first audio data with a target sound effect option, wherein the target sound effect option is the first sound effect option designated by the sound effect selection instruction;
parsing a second sound effect option of second media asset data, wherein the second media asset data is the next media asset data adjacent to the first media asset data in a media asset play list, and the second media asset data comprises second audio data;
if the second sound effect option comprises the target sound effect option, controlling the audio output device to play the second audio data with the target sound effect option; and
if the second sound effect option does not comprise the target sound effect option, updating the sound effect selection interface according to the second sound effect option, and controlling the display to display the updated sound effect selection interface.
2. The display device of claim 1, wherein, when parsing the first sound effect option of the first media asset data, the controller is specifically configured to:
decoding a first media asset code stream to obtain the first media asset data;
analyzing the first media asset data to obtain sound effect scene information of the first media asset data;
and generating the first sound effect option according to the sound effect scene information.
3. The display device of claim 2, wherein, before performing the step of controlling the display to display the sound effect selection interface, the controller is further configured to:
Analyzing sound effect configuration information in the sound effect scene information based on the first sound effect option, wherein the sound effect configuration information comprises current configuration information and optional configuration information;
And generating a sound effect selection interface according to the current configuration information and the optional configuration information.
4. The display device of claim 1, wherein the first media asset data further comprises first subtitle data, and when controlling the audio output device to play the first audio data with the target sound effect option, the controller is specifically configured to:
Analyzing the target sound effect options to obtain target language information;
Acquiring target subtitle data according to the target language information, and updating the first audio data according to the target language information;
aligning the time sequence relationship between the target subtitle data and the updated first audio data;
and according to the time sequence relation, controlling the audio output device to play the first audio data, and controlling the display to display the target subtitle data.
5. The display device of claim 1, wherein, when controlling the audio output device to play the first audio data with the target sound effect option, the controller is specifically configured to:
Analyzing the target sound effect options at an application layer to obtain target configuration information;
the target configuration information is sent to a middleware layer through a first interface, and the first interface is used for connecting the application layer and the middleware layer;
the target configuration information is sent to a kernel layer through a second interface;
at the kernel layer, updating the sound effect scene information of the first media asset data based on the target configuration information;
and sending the updated sound effect scene information from the kernel layer to the application layer to acquire updated first audio data.
6. The display device of claim 5, wherein, when controlling the audio output device to play the second audio data with the target sound effect option, the controller is specifically configured to:
Acquiring the second audio data;
analyzing the sound effect scene information of the second audio data;
configuring sound effect scene information of the second audio data according to the target configuration information;
and updating the second audio data according to the configured sound effect scene information, and controlling the audio output device to play the updated second audio data.
7. The display device of claim 5, wherein, when sending the updated sound effect scene information from the kernel layer to the application layer, the controller is specifically configured to:
acquiring configuration parameters of the sound effect scene information;
Sending the configuration parameters to the application layer;
and generating sound effect configuration information of the first audio data according to the configuration parameters so as to update the first audio data.
8. The display device according to claim 3, wherein the initial position of the sound effect scene information is the kernel layer, and when parsing the sound effect configuration information in the sound effect scene information, the controller is specifically configured to:
acquiring an information format of the sound effect scene information at the kernel layer;
transmitting the sound effect scene information to the application layer;
and at the application layer, analyzing the sound effect scene information according to the analysis format corresponding to the information format to obtain sound effect configuration information.
9. The display device of claim 1, wherein, when updating the sound effect selection interface according to the second sound effect option, the controller is specifically configured to:
traversing the second sound effect options to obtain sound effect options to be deleted and newly added sound effect options, wherein the newly added sound effect options are the second sound effect options other than the first sound effect options;
and updating the sound effect selection interface according to the sound effect options to be deleted and the newly added sound effect options.
10. A sound effect configuration method, applied to the display device of any one of claims 1 to 9, the display device comprising a display configured to display a user interface, an audio output device configured to play audio data, and a controller, the method comprising:
when playing first media asset data, parsing a first sound effect option of the first media asset data, and controlling the display to display a sound effect selection interface, wherein the sound effect selection interface comprises the first sound effect option, and the first media asset data comprises first audio data;
in response to a sound effect selection instruction input by a user based on the sound effect selection interface, controlling the audio output device to play the first audio data with a target sound effect option, wherein the target sound effect option is the first sound effect option designated by the sound effect selection instruction;
parsing a second sound effect option of second media asset data, wherein the second media asset data is the next media asset data adjacent to the first media asset data in a media asset play list, and the second media asset data comprises second audio data;
if the second sound effect option comprises the target sound effect option, controlling the audio output device to play the second audio data with the target sound effect option; and
if the second sound effect option does not comprise the target sound effect option, updating the sound effect selection interface according to the second sound effect option, and controlling the display to display the updated sound effect selection interface.
Application CN202411116832.XA, filed 2024-08-14: A display device and a sound effect configuration method (legal status: pending).

Publication: CN119166094A, published 2024-12-20.


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination