
CN119155335A - Intelligent well lid data analysis method and device based on Internet of things and storage medium - Google Patents


Info

Publication number
CN119155335A
Authority
CN
China
Prior art keywords
template
network
monitoring data
neural network
commonality
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202411650649.8A
Other languages
Chinese (zh)
Other versions
CN119155335B (en)
Inventor
方惠蓉
沈炎松
施玉娟
陈绿苗
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chuzhou Vocational and Technical College
Original Assignee
Chuzhou Vocational and Technical College
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chuzhou Vocational and Technical College filed Critical Chuzhou Vocational and Technical College
Priority to CN202411650649.8A priority Critical patent/CN119155335B/en
Publication of CN119155335A publication Critical patent/CN119155335A/en
Application granted granted Critical
Publication of CN119155335B publication Critical patent/CN119155335B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/10 Pre-processing; Data cleansing
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 29/00 Checking or monitoring of signalling or alarm systems; Prevention or correction of operating errors, e.g. preventing unauthorised operation
    • G08B 29/18 Prevention or correction of operating errors
    • G08B 29/185 Signal analysis techniques for reducing or preventing false alarms or for enhancing the reliability of the system
    • G08B 29/186 Fuzzy logic; neural networks
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 29/00 Checking or monitoring of signalling or alarm systems; Prevention or correction of operating errors, e.g. preventing unauthorised operation
    • G08B 29/18 Prevention or correction of operating errors
    • G08B 29/185 Signal analysis techniques for reducing or preventing false alarms or for enhancing the reliability of the system
    • G08B 29/188 Data fusion; cooperative systems, e.g. voting among different detectors
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16Y INFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
    • G16Y 10/00 Economic sectors
    • G16Y 10/35 Utilities, e.g. electricity, gas or water
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16Y INFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
    • G16Y 20/00 Information sensed or collected by the things
    • G16Y 20/10 Information sensed or collected by the things relating to the environment, e.g. temperature; relating to location
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16Y INFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
    • G16Y 40/00 IoT characterised by the purpose of the information processing
    • G16Y 40/10 Detection; Monitoring
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/12 Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
    • H04L 67/125 Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks involving control of end-device applications over a network


Abstract

The invention provides an intelligent well lid data analysis method, device and storage medium based on the Internet of things. A first monitoring data set is acquired for each intelligent well lid, and a second monitoring data set is acquired for the past intelligent well lid sequence of each sewer health state. Data reasoning is performed on a first network tuning template with an initial neural network to obtain the inference confidence corresponding to the masked portion of the template; a commonality metric is computed between the first network tuning template and its corresponding first triggered tuning template with the initial neural network to obtain a first template commonality metric result; the initial neural network is tuned based on the inference confidence, the masked portion of the first network tuning template and the first template commonality metric result; and the tuned initial neural network is then optimized with a second network tuning template to obtain the optimized initial neural network. The invention improves the characterization quality of the intelligent well lid feature vector representation and thereby improves the accuracy of trigger early warning.

Description

Intelligent well lid data analysis method and device based on Internet of things and storage medium
Technical Field
The invention relates to the technical field of data processing, in particular to an intelligent well lid data analysis method, device and storage medium based on the Internet of things.
Background
With the continuous advancement of smart city construction, the intelligent well lid, as an important component of urban infrastructure, and its intelligent management are becoming increasingly important. The intelligent well lid monitors the well lid state and surrounding environment parameters, such as water level, harmful gas concentration and noise level, in real time through built-in sensors, providing rich data support for city managers. However, how to effectively process and analyze these massive monitoring data and accurately predict and warn of sewer health status remains a major challenge in current smart city construction. The prior art mostly adopts traditional data mining and machine learning methods to process and analyze intelligent well lid monitoring data. When processing large-scale, high-dimensional monitoring data, these methods often suffer from low computational efficiency and low prediction accuracy. In particular, when processing sequence data, traditional methods struggle to capture the time dependencies and latent patterns in the data, resulting in poor early warning performance.
Disclosure of Invention
In view of the above, the invention provides a method, a device and a storage medium for intelligent well lid data analysis based on the Internet of things. The technical scheme of the invention is realized as follows:
In one aspect, the invention provides an intelligent well lid data analysis method based on the Internet of things, which comprises: obtaining a first monitoring data set corresponding to each intelligent well lid in an intelligent well lid networking; obtaining, based on the first monitoring data set, a second monitoring data set associated with the past intelligent well lid sequence of each sewer health state in a sewer health state set; constructing a first network tuning template based on the second monitoring data set, wherein the first network tuning template comprises a triggered tuning template and an untriggered tuning template; carrying out data reasoning on the first network tuning template according to an initial neural network to obtain an inference confidence corresponding to a masked portion in the first network tuning template; carrying out a commonality metric on the first network tuning template and a first triggered tuning template corresponding to the first network tuning template based on the initial neural network to obtain a first template commonality metric result; tuning the initial neural network based on the inference confidence, the masked portion in the first network tuning template and the first template commonality metric result to obtain a tuned initial neural network; and optimizing the tuned initial neural network based on a second network tuning template to obtain an optimized initial neural network, wherein the optimized initial neural network is used for generating initial coding features of the intelligent well lid.
In another aspect, the invention provides a computer apparatus comprising a memory and a processor, the memory storing a computer program executable on the processor, the processor implementing the steps of the method described above when the program is executed.
In yet another aspect, the invention provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method described above.
The beneficial effects of the invention are as follows. A first monitoring data set corresponding to each intelligent well lid in the intelligent well lid networking is obtained, and the first monitoring data sets of the intelligent well lids covered by the past intelligent well lid sequence of each sewer health state are combined to obtain the second monitoring data set corresponding to that past sequence. Based on the second monitoring data sets, a first network tuning template for tuning the initial neural network and a second network tuning template for optimizing the tuned network are generated. According to the first network tuning template, the initial neural network is tuned in a parallel manner combining data reasoning and the commonality metric, yielding the tuned initial neural network. According to the second network tuning template, the tuned network is optimized by means of the commonality metric, yielding the optimized initial neural network. The optimized initial neural network can generate an initialized intelligent well lid feature vector representation for state trigger early warning; by exploiting the network's capacity to mine features of the past intelligent well lid sequence, the characterization quality of the intelligent well lid feature vector representation is improved, and the accuracy of trigger early warning is improved in turn.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
Fig. 1 is a schematic implementation flow chart of an intelligent well lid data analysis method based on the internet of things according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a hardware entity of a computer device according to an embodiment of the present invention.
Detailed Description
The technical solution of the present invention is further elaborated below with reference to the accompanying drawings and embodiments. The described embodiments should not be construed as limiting the invention; all other embodiments obtained by a person skilled in the art without inventive effort fall within the scope of protection of the present invention.
The embodiment of the invention provides an intelligent well lid data analysis method based on the Internet of things, which can be executed by a processor of a computer device. The computer device may refer to a device with data processing capability, such as a server, a notebook computer, a tablet computer, a desktop computer, and a mobile device. Fig. 1 is a schematic implementation flow chart of a smart well lid data analysis method based on the internet of things, which is provided by the embodiment of the invention, as shown in fig. 1, and includes the following steps:
Step S100, acquiring a first monitoring data set corresponding to each intelligent well lid in the intelligent well lid networking, and acquiring a second monitoring data set associated with a past intelligent well lid sequence of each sewer health state in the sewer health state set based on the first monitoring data set.
Step S100 collects and processes a large amount of monitoring data from the smart well lid network to construct a base data set for subsequent analysis. The computer device acquires a first monitoring data set corresponding to each intelligent well lid in the intelligent well lid networking. The intelligent well lid networking is a network composed of a plurality of intelligent well lids distributed at different geographic positions, and each intelligent well lid is provided with various sensors for monitoring various parameters of the well lid and surrounding environment in real time. These parameters include, but are not limited to, water flow monitoring data (e.g., water level height, water flow rate), harmful gas monitoring data (e.g., hydrogen sulfide concentration, methane concentration), and noise monitoring data (e.g., abnormal sounds under the manhole cover, pipe leakage sounds, etc.).
Taking water flow monitoring data as an example, a smart well lid may report water level data of [0.5 m, 0.6 m, 0.7 m, ...], representing water level heights measured at different points in time. Likewise, the harmful gas monitoring data may include a hydrogen sulfide concentration time series of [5 ppm, 6 ppm, 5 ppm, ...], indicating the change of hydrogen sulfide concentration over time. The noise monitoring data may be a series of sampled audio signal values representing the noise level of the environment in the vicinity of the manhole cover.
The computer device fuses the different types of data per intelligent well lid to form a comprehensive first monitoring data set. For example: intelligent well lid 1: water flow monitoring data [0.5, 0.6, 0.7, ...], harmful gas monitoring data [5, 6, 5, ...], noise monitoring data [audio_sample1, audio_sample2, ...]; intelligent well lid 2: water flow monitoring data [0.4, 0.5, 0.6, ...], harmful gas monitoring data [4, 5, 4, ...], noise monitoring data [audio_sample3, audio_sample4, ...].
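The fusion described above can be sketched as follows. This is a hypothetical illustration: the field names, tuple layout and integer timestamps are assumptions made for the example, not structures defined by the patent.

```python
# Hypothetical sketch: fusing per-lid sensor streams into a "first
# monitoring data set" keyed by intelligent well lid ID. Field names
# and the (id, sensor, timestamp, value) reading layout are assumed.

def build_first_monitoring_set(raw_readings):
    """Group time-stamped readings by well lid ID and sensor type."""
    dataset = {}
    for lid_id, sensor_type, timestamp, value in raw_readings:
        lid = dataset.setdefault(lid_id, {})
        lid.setdefault(sensor_type, []).append((timestamp, value))
    # Sort each stream by timestamp so downstream steps see ordered series.
    for lid in dataset.values():
        for stream in lid.values():
            stream.sort()
    return dataset

readings = [
    ("lid_1", "water_level", 2, 0.7),
    ("lid_1", "water_level", 1, 0.6),
    ("lid_1", "h2s_ppm", 1, 5.0),
    ("lid_2", "water_level", 1, 0.4),
]
first_set = build_first_monitoring_set(readings)
```

Each well lid then holds one ordered time series per sensor type, ready for the per-state extraction described next.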
Next, the computer device further obtains, based on the first monitoring data set, the past intelligent well lid sequence associated with each state in the sewer health state set. The sewer health state set may contain various states, such as "normal", "slight blockage", "severe blockage" and "gas leakage". Each state corresponds to a set of smart well lids that triggered that state in the past. These sequences are time-ordered, recording the intelligent well lids that triggered a specific health state at different time points.
For example, for the "slight blockage" state, the past smart well lid sequence may include smart well lid A, smart well lid B and smart well lid C, which triggered the "slight blockage" state at time points T1, T2 and T3, respectively. Triggering the state means that these manhole covers detected abnormal data, such as an abnormal rise in water level, harmful gas concentration exceeding its threshold, or abnormal noise, at the corresponding time points. The computer device finds the smart well covers that triggered a particular health state and gathers their first monitoring data at the time the state was triggered. For the "slight blockage" state, assuming smart well lid A triggered at time T1, the computer device extracts the first monitoring data of smart well lid A around time T1. These data form part of the second monitoring data set associated with the "slight blockage" state.
To explain more specifically, assume the first monitoring data of smart well lid A at time point T1 is as follows: water flow monitoring data [0.8, 0.9, 1.0, ...], harmful gas monitoring data [7, 8, 9, ...], noise monitoring data [audio_sample5, audio_sample6, ...].
The computer device similarly collects the first monitoring data of smart well lid B at time T2 and smart well lid C at time T3, and combines these data with the data of smart well lid A into a complete second monitoring data set dedicated to the analysis of the "slight blockage" state.
This process is repeated for each of the sets of sewer health conditions, resulting in a plurality of second sets of monitored data, each set corresponding to a particular sewer health condition. These sets will serve as the basis for subsequent analysis to help the computer device understand and predict the occurrence laws and triggers for different health states.
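Assembling a second monitoring data set for one health state, as the preceding paragraphs describe, might look like the sketch below; the time window around each trigger and the data layout are illustrative assumptions, not values fixed by the patent.

```python
# Hypothetical sketch of assembling the "second monitoring data set" for
# one health state: for each well lid that triggered the state, take its
# first-set readings in a window around the trigger time.

def second_set_for_state(first_set, triggers, window=1):
    """triggers: list of (lid_id, trigger_time) pairs for one health state."""
    result = []
    for lid_id, t in triggers:
        snapshot = {
            sensor: [v for ts, v in stream if abs(ts - t) <= window]
            for sensor, stream in first_set[lid_id].items()
        }
        result.append({"lid": lid_id, "time": t, "data": snapshot})
    return result

first_set = {
    "lid_A": {"water_level": [(1, 0.8), (2, 0.9), (3, 1.0), (9, 0.5)]},
    "lid_B": {"water_level": [(2, 0.4), (3, 0.6)]},
}
# e.g. "slight blockage" triggered by lid_A at T=2 and lid_B at T=3
slight_blockage = second_set_for_state(first_set, [("lid_A", 2), ("lid_B", 3)])
```

Repeating this per health state yields one such set per state, as the text describes.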
In constructing the second monitoring dataset, the computer apparatus may also apply some data preprocessing techniques to improve the data quality. For example, for noise monitoring data, the computer device may use audio processing techniques to extract key features (e.g., audio energy, spectral distribution, etc.) to better characterize the abnormal sounds under the manhole cover. For water flow and harmful gas monitoring data, the computer device may perform denoising, smoothing or normalizing processes to reduce the effects of measurement errors and outliers.
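The smoothing and normalization mentioned above could be implemented minimally as follows; the window size and min-max scaling are assumed choices, since the patent does not fix particular preprocessing algorithms.

```python
# Illustrative preprocessing helpers of the kind the text mentions.
# The trailing moving-average window and [0, 1] min-max scaling are
# assumptions for the example.

def moving_average(series, window=3):
    """Smooth a series with a simple trailing moving average."""
    out = []
    for i in range(len(series)):
        lo = max(0, i - window + 1)
        chunk = series[lo:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def min_max_normalize(series):
    """Scale a series to [0, 1]; a constant series maps to all zeros."""
    lo, hi = min(series), max(series)
    if hi == lo:
        return [0.0] * len(series)
    return [(x - lo) / (hi - lo) for x in series]

water = [0.5, 0.6, 0.7, 1.2, 0.8]
smoothed = moving_average(water)
scaled = min_max_normalize(water)
```

Normalization puts the different sensor channels on the same order of magnitude, as the text requires, while smoothing damps measurement noise before template construction.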
Step S200, constructing a first network tuning template based on the second monitoring data set, and carrying out data reasoning on the first network tuning template according to the initial neural network to obtain an inference confidence corresponding to the masked portion in the first network tuning template, wherein the first network tuning template comprises a triggered tuning template and an untriggered tuning template.
Step S200 constructs a first network tuning template using the second monitoring data set and performs data reasoning on this template with the initial neural network to evaluate the model's predictive ability on unknown (or masked) data. First, the computer device constructs a first network tuning template based on the second monitoring data set. The template is composed of a plurality of data segments, each corresponding to the monitoring data of a smart well lid when a sewer health state is triggered, or not triggered, at a specific time point. These data segments are called tuning templates, and together they form the first network tuning template.
The first network tuning template comprises two types of tuning templates: a triggered tuning template and an untriggered tuning template. The triggered tuning template consists of data segments of intelligent well covers that have triggered specific sewer health states in the past. For example, if a smart manhole cover triggered a "slight blockage" state at time T1, the monitoring data of that cover at time T1 (including water flow, harmful gas and noise monitoring data) is used as part of the triggered tuning template. The untriggered tuning template consists of data segments of intelligent well covers that have not triggered any sewer health state in the past. These segments provide background information about the "normal" or "non-abnormal" state, helping the neural network learn to distinguish normal from abnormal states. In constructing the first network tuning template, the computer device may preprocess the data segments to improve their quality and consistency. For example, it may normalize the data so that the values of different monitored parameters are on the same order of magnitude, or smooth the data to reduce the effects of noise and outliers.
Next, the computer device loads the first network tuning template into the initial neural network and performs inference on the data in the template. The initial neural network is a pre-trained model with certain data processing and pattern recognition capabilities. In this step, the task of the neural network is to predict the masked portion of the first network tuning template.
The masked portion refers to data segments that are deliberately hidden or withheld. For example, in a triggered tuning template, the computer device may intentionally conceal part of the water flow monitoring data or noise monitoring data and then let the neural network predict the concealed data based on the remaining data. This process is similar to giving the neural network a "fill in the blank" exercise, letting it infer the missing parts from context.
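The masking step can be sketched like this; the mask sentinel, mask rate and fixed random seed are assumptions for illustration, not parameters specified by the patent.

```python
# A minimal sketch of masking a tuning template: hide a fraction of the
# values so the network must infer them from context. MASK, mask_rate
# and the seed are illustrative assumptions.
import random

MASK = None  # sentinel standing in for a hidden value

def mask_template(segment, mask_rate=0.3, seed=0):
    """Return (masked_segment, targets) where targets maps index -> hidden value."""
    rng = random.Random(seed)
    masked, targets = [], {}
    for i, value in enumerate(segment):
        if rng.random() < mask_rate:
            masked.append(MASK)
            targets[i] = value
        else:
            masked.append(value)
    return masked, targets

masked, targets = mask_template([0.8, 0.9, 1.0, 1.1, 1.2])
```

The `targets` mapping keeps the ground-truth values, so the network's predictions for the masked indices can be scored afterwards.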
To evaluate the predictive power of the neural network, the computer device calculates an inference confidence. The inference confidence is an index for measuring the prediction accuracy of the neural network, and represents the confidence level of the neural network on the prediction result. The higher the inference confidence, the more reliable the prediction of the neural network, and conversely, the greater the uncertainty of the prediction.
There are a number of ways to calculate the inference confidence; one common way is to use the softmax function. The softmax function converts the output of the neural network into a probability distribution, where each probability value represents the confidence of the corresponding category. For the task of predicting the masked portion, the softmax function helps the computer device calculate the prediction probability for each possible data value and select the value with the highest probability as the prediction result.
Assume the neural network predicts the water flow monitoring data of a certain masked portion and outputs a vector containing multiple predicted values, for example [0.1, 0.2, 0.7, 0.0], where each value represents the predicted probability of a corresponding water flow monitoring data value. In this example 0.7 is the maximum, so the computer device selects the data value corresponding to 0.7 as the prediction result and takes 0.7 as the inference confidence. In some cases the neural network may give erroneous predictions with high confidence, due to data noise, model overfitting or other reasons. Therefore, in practical applications, the computer device generally combines multiple indicators (such as the inference confidence and cross-validation results) to comprehensively evaluate the performance of the neural network and make the necessary adjustments and optimizations.
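The softmax-based confidence computation described above can be sketched as follows; the logits and candidate water level values are made-up examples.

```python
# Sketch of turning raw network outputs (logits) into a prediction plus
# an inference confidence via softmax, as the text describes. The logits
# and candidate values below are illustrative assumptions.
import math

def softmax(logits):
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict_with_confidence(logits, candidates):
    """Return (predicted value, inference confidence)."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return candidates[best], probs[best]

value, confidence = predict_with_confidence(
    [0.2, 1.1, 2.5, -0.4],
    ["0.6 m", "0.7 m", "0.8 m", "0.9 m"],
)
```

The highest-probability candidate is the prediction, and its probability plays the role of the inference confidence in the text.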
Step S300, carrying out a commonality metric on the first network tuning template and a first triggered tuning template corresponding to the first network tuning template based on the initial neural network to obtain a first template commonality metric result.
Step S300 uses the initial neural network to measure the commonality of the first network tuning template and the corresponding first triggered tuning template, to evaluate the similarity and relevance between them.
The initial neural network is a pre-trained model with certain data processing and pattern recognition capabilities. In step S200, this model has already been used to perform data reasoning on the masked portion of the first network tuning template.
The first network tuning template is a set of data segments, each corresponding to the monitoring data of a smart well lid when a sewer health state was triggered, or not triggered, at a specific time point. These data segments serve as inputs to the neural network for the subsequent data reasoning and commonality metric.
The first triggered tuning template is another template associated with the first network tuning template, consisting of data segments of intelligent well lids that triggered a specific sewer health state. Compared with the first network tuning template, the first triggered tuning template focuses on well lid data that has actually triggered a health state. In step S300, the computer device first loads the first network tuning template and the first triggered tuning template into the initial neural network. It then uses the neural network to perform feature extraction on these templates, converting the original high-dimensional data into low-dimensional feature vector representations. These feature vectors capture the key information in the data while reducing its dimensionality, facilitating subsequent computation and analysis. Feature extraction maps the raw data into a new feature space, in which similar data points cluster together to form feature clusters representing the different categories or patterns in the data.
Next, the computer device performs the commonality metric on the extracted feature vectors. A commonality metric is a method of evaluating the similarity between two or more data sets; here, it is used to evaluate the association between the first network tuning template and the first triggered tuning template.
There are many specific ways to compute a commonality metric; one common approach is to calculate the cosine similarity between two feature vectors. Cosine similarity measures the similarity of two vector directions and ranges over [-1, 1]: it is 1 when the two vectors point in exactly the same direction, -1 when they point in opposite directions, and 0 when they are orthogonal.
In practical applications, the computer device may calculate the cosine similarity between pairs of feature vectors and take the average as the first template commonality metric result. This result reflects the overall similarity of the first network tuning template and the first triggered tuning template in the feature space.
In addition to cosine similarity, other methods may be used for commonality metrics such as euclidean distance, manhattan distance, pearson correlation coefficient, etc. These methods have advantages and disadvantages and are suitable for different application scenarios and data types. In practice, the computer device will choose the most appropriate commonality measure according to the circumstances.
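A minimal version of the cosine-similarity commonality metric described above, averaging pairwise similarities between the two templates' feature vectors; the vectors here are toy stand-ins for the network's extracted features.

```python
# Sketch of the cosine-similarity commonality metric: pairwise
# similarities between the feature vectors of two templates, averaged
# into a single score. The feature vectors are illustrative assumptions.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def template_commonality(features_a, features_b):
    """Average pairwise cosine similarity across the two feature sets."""
    sims = [cosine_similarity(a, b) for a in features_a for b in features_b]
    return sum(sims) / len(sims)

score = template_commonality(
    [[1.0, 0.0], [0.9, 0.1]],   # features of the first network tuning template
    [[1.0, 0.0], [0.0, 1.0]],   # features of the first triggered tuning template
)
```

Swapping `cosine_similarity` for a Euclidean or Manhattan distance would realize the alternative metrics the text lists, with the same averaging scheme.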
Through the commonality measurement process of step S300, the computer device is able to evaluate the similarity and association between the first network tuning template and the first trigger tuning template. This information is critical to subsequent model tuning and optimization. If the commonality metric between two templates is high, indicating that the data distribution and pattern between them is more consistent, this may mean that the neural network has learned the characteristics of the data better. Conversely, if the commonality metric results are low, further adjustments to the parameters or structure of the neural network may be required to improve its performance.
Step S400, tuning the initial neural network based on the inference confidence, the masked portion in the first network tuning template and the first template commonality metric result to obtain the tuned initial neural network.
Step S400 tunes the initial neural network based on several key indicators to improve its ability to process intelligent well lid monitoring data. In step S200, the computer device performed data reasoning on the masked portion of the first network tuning template using the initial neural network and calculated the inference confidence, an important indicator of the network's prediction accuracy on unknown data that reflects its confidence in the prediction results. In step S300, the computer device performed the commonality metric on the first network tuning template and the first triggered tuning template and obtained the first template commonality metric result, which measures the similarity and relevance of the two templates in the feature space and provides an important reference for tuning the neural network.
In step S400, the computer device first considers the inference confidence and the first template commonality metric result. A high inference confidence means the network predicts the masked portion accurately, which generally indicates that its parameter settings are reasonable and no major adjustment is needed; a low inference confidence may mean the network has difficulty processing certain types of data and its parameters need tuning. Likewise, a high commonality metric result indicates that the first network tuning template and the first triggered tuning template are distributed consistently in the feature space, which usually means the network has learned the features of these data well, while a low result may mean the network is biased when processing certain types of data and its parameters need fine-tuning to improve performance.
Based on these considerations, the computer device will tune the initial neural network. The tuning process typically includes the following steps:
1. Parameter adjustment: the computer device adjusts the parameters of the neural network according to the inference confidence and the commonality measurement result. This includes setting hyperparameters such as the learning rate, weight decay and batch size, as well as updating the weights of the network's layers. By adjusting these parameters, the computer device changes the training process so that it better fits the characteristics of the intelligent well lid monitoring data.
2. Weight updating: on the basis of the parameter adjustment, the computer device updates the weights of the neural network. This is typically done with the back-propagation algorithm, which updates the weights layer by layer based on the network's prediction error and gradient information. Through repeated iterations of this process, the neural network gradually learns the intrinsic laws and features of the data.
3. Model validation: during tuning, the computer device periodically validates the performance of the neural network. This is typically done by holding out a portion of the data as a validation set that does not participate in training. By computing performance metrics (e.g., accuracy, recall, F1 score) on the validation set, the computer device can evaluate the network's generalization ability and adjust the tuning strategy accordingly.
4. Stop conditions: to avoid overfitting and wasted computing resources, the computer device sets tuning stop conditions. For example, when the performance metrics on the validation set stop improving, or when the tuning process reaches a preset number of iterations, the computer device stops tuning and stores the tuned neural network model.
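A minimal sketch of the tuning loop in items 1 to 4 above, with early stopping on a validation score. The toy one-weight model, the learning rate and the patience value are illustrative assumptions, not part of the described method:

```python
# Minimal early-stopping tuning loop illustrating items 1-4 above.
# The "model" is a single weight w fitted to a toy linear target;
# train_step, evaluate and the patience rule are hypothetical
# stand-ins, not the patent's actual implementation.

def train_step(w, data, lr=0.1):
    # one gradient-descent step on mean squared error (back-propagation analogue)
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def evaluate(w, data):
    # validation metric, higher is better: negative mean squared error
    return -sum((w * x - y) ** 2 for x, y in data) / len(data)

def tune(w, train, val, max_epochs=100, patience=5):
    best_w, best_score, stale = w, evaluate(w, val), 0
    for _ in range(max_epochs):
        w = train_step(w, train)              # parameter / weight update
        score = evaluate(w, val)              # periodic model validation
        if score > best_score:
            best_w, best_score, stale = w, score, 0
        else:
            stale += 1
            if stale >= patience:             # stop condition: no improvement
                break
    return best_w, best_score

train = [(x, 2 * x) for x in (1.0, 2.0, 3.0)]   # toy data, true slope = 2
val = [(1.5, 3.0), (2.5, 5.0)]
w, score = tune(0.0, train, val)
print(round(w, 2))  # → 2.0
```

The loop stops either when the epoch budget is exhausted or when the validation score has not improved for `patience` consecutive epochs, mirroring the two stop conditions in item 4.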
During tuning, the computer device may also apply advanced techniques to optimize the network's performance, such as regularization to prevent overfitting or dropout to reduce the network's dependence on specific data points. In addition, transfer learning can be used to migrate a pre-trained neural network model to the intelligent well lid data analysis task, accelerating training and improving model performance.
Through the tuning process of step S400, the computer device improves the performance of the initial neural network so that it better fits the characteristics of intelligent well lid monitoring data. The tuned neural network model has stronger generalization ability and higher prediction accuracy, providing solid support for subsequent intelligent well lid data analysis tasks.
Step S500: constructing a second network tuning template based on the second monitoring data set, and performing a commonality measurement on the second network tuning template and a second trigger tuning template corresponding to it based on the tuned initial neural network, to obtain a second template commonality measurement result, wherein the second network tuning template itself does not contain a trigger tuning template.
Step S500 further refines the optimization process for the neural network, and evaluates and optimizes the initial neural network after tuning by constructing a second network tuning template and performing a commonality metric. The subject of this step is still a computer device that utilizes complex data processing and machine learning techniques to accomplish the task.
In step S400, the computer device performs tuning on the initial neural network based on the inference confidence level, the masking portion in the first network tuning template, and the first template commonality measurement result, to obtain a neural network model with better performance. This tuned neural network will now be used to construct and optimize a second network tuning template.
At the beginning of step S500, the computer device first builds a second network tuning template based on the second monitoring data set. Unlike the first network tuning template, the second network tuning template contains no trigger tuning template, i.e., it does not contain smart well lid data that has triggered a particular sewer health status. Instead, it focuses on the intelligent well lid data that did not trigger a health status and on how the monitored data of those well lids changes across time points.
The process of constructing the second network tuning template is similar to that of the first network tuning template, but with a different emphasis. The computer device extracts intelligent well lid data fragments which do not trigger health status from the second monitoring data set, and organizes the data fragments according to time sequence. It will then incorporate into these data fragments indicator marks that indicate the beginning of the data sequence or the location of the particular data, thereby helping the neural network to better understand the data structure and context information.
The computer device loads the second network tuning template into the tuned initial neural network, and performs feature extraction on the data in the template. Feature extraction is an important step in machine learning, which helps the neural network to better understand and process the data by converting the raw data into more meaningful feature vectors. In this process, the computer device may use a Convolutional Neural Network (CNN), a Recurrent Neural Network (RNN), or other type of neural network structure to extract spatial or temporal features of the data.
After feature extraction, the computer device performs a commonality measure on the second network tuning template and the corresponding second trigger tuning template. The second trigger tuning template is constructed on the basis of the second network tuning template, but it contains the health-status-triggering smart well lid data fragments associated with that template. Although these fragments are not directly contained in the second network tuning template, they have potential relevance and similarity to the data in it.
A commonality measure evaluates the similarity between two data sets. In step S500, the computer device may calculate the similarity between the second network tuning template and the second trigger tuning template using cosine similarity, Euclidean distance, the Pearson correlation coefficient or other metrics. Each metric has its own strengths and weaknesses and suits different application scenarios and data types. In practice, the computer device may calculate the cosine similarity between pairs of feature vectors and take the average as the second template commonality measurement result. This result reflects the overall similarity of the second network tuning template and the second trigger tuning template in feature space.

After obtaining the second template commonality measurement result, the computer device optimizes the tuned initial neural network based on it. The optimization aims to further improve the network's understanding and processing of intelligent well lid data that has not triggered a health status, which may involve adjusting the network's parameters, structure or training method.
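The averaged pairwise cosine similarity mentioned above can be sketched as follows; the two-dimensional feature vectors are invented purely for illustration:

```python
import math

# Commonality metric sketch: average pairwise cosine similarity between
# the feature vectors of two templates. The 2-D vectors are invented.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def commonality(template_a, template_b):
    # pair the i-th feature vector of each template, then average
    sims = [cosine(a, b) for a, b in zip(template_a, template_b)]
    return sum(sims) / len(sims)

net_template = [[1.0, 0.0], [0.6, 0.8]]       # network tuning template features
trigger_template = [[1.0, 0.0], [0.8, 0.6]]   # trigger tuning template features
print(round(commonality(net_template, trigger_template), 2))  # → 0.98
```

Euclidean distance or the Pearson correlation coefficient could be substituted for `cosine` without changing the averaging structure.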
In particular, the computer device may adjust the weights and bias terms of the neural network based on the second template commonality metric result to better fit the data distribution in the second network tuning template. At the same time, it is also possible to improve the architecture and training process of the neural network by adding additional hidden layers, changing activation functions or adjusting learning rates, etc.
In addition, the computer device may also use techniques such as cross-validation to evaluate the performance of the optimized neural network. Cross-validation divides the data set into subsets that are used in turn for training and validation, helping the computer device evaluate the network's generalization ability more accurately. Through the execution of step S500, the computer device further optimizes the tuned initial neural network so that it performs better on intelligent well lid data that has not triggered a health status. The improved accuracy and robustness of the neural network provide more reliable support for subsequent intelligent well lid data analysis tasks.
Step S600: optimizing the tuned initial neural network based on the second template commonality measurement result to obtain an optimized initial neural network, wherein the optimized initial neural network is used for generating initial coding features of the intelligent well lid.
Step S600 further optimizes the tuned initial neural network to generate more accurate and useful initial coding features of the intelligent well lid. The main execution body of this step is the computer device, which fine-tunes the neural network by comprehensively applying the second template commonality measurement result and other related information, thereby improving its ability to process and analyze intelligent well lid data.
In step S500, the computer device has constructed a second network calibration template based on the second monitoring data set, and performs a commonality measurement on the second network calibration template and the second trigger calibration template, so as to obtain a second template commonality measurement result. The result reflects the similarity and the relevance of the second network tuning template and the second trigger tuning template in the feature space, and provides an important reference for further optimization of the neural network.
In step S600, the computer device optimizes the tuned initial neural network using the second template commonality measurement result. The optimization aims to improve the network's understanding and processing of intelligent well lid data so that it can generate the initial coding features of the intelligent well lid more accurately. These features will be used in subsequent data analysis and early warning tasks, providing strong support for smart city management.
The optimization process generally includes the following steps:
1. Analyzing the second template commonality measurement result: the computer device first analyzes the second template commonality measurement result to understand the similarities and differences between the second network tuning template and the second trigger tuning template. This helps determine in which respects the neural network performs well and which need improvement.
2. Adjusting the neural network parameters: based on the analysis result, the computer device adjusts the parameters of the neural network. This may include setting hyperparameters such as the learning rate, weight decay and batch size, as well as updating the weights of the network's layers. The purpose of adjusting the parameters is to make the network fit the characteristics of the intelligent well lid data better, improving both its fitting ability and its generalization ability.
3. Introducing regularization: to avoid overfitting, the computer device may introduce regularization into the neural network. Regularization limits the network's complexity and prevents it from over-learning noise and outliers in the training data. Common regularization techniques include L1 regularization, L2 regularization and dropout.
4. Using transfer learning: if the amount of available data is limited, the computer device may employ transfer learning to accelerate training and improve model performance. Transfer learning migrates knowledge learned on one task to another related task, exploiting existing experience and knowledge to improve performance on the new task.
5. Validation and testing: during the optimization process, the computer device periodically validates and tests the performance of the neural network. This typically involves using portions of the data as a validation set and a test set to evaluate accuracy and generalization ability. If the performance on both sets reaches the desired target, the optimization process may stop; otherwise the computer device continues to adjust the parameters and the optimization strategy.
6. Generating the initial coding features of the intelligent well lid: the optimized neural network is used to generate the initial coding features of the intelligent well lid. These features are highly abstract, compressed representations of the raw monitoring data that capture its key information and patterns. When generating the features, the computer device computes a feature vector representation of the input data using the network's forward-propagation algorithm.
The following is a specific example to illustrate the process of feature generation:
Let us assume a monitoring data set D of the smart well lid, which contains monitoring data of a plurality of smart well lids at different time points. The monitoring data of each intelligent well lid comprises water flow monitoring data, harmful gas monitoring data, noise monitoring data and other dimensions. After preprocessing and feature extraction, a feature matrix X can be obtained, wherein each row represents a feature vector of the intelligent manhole cover.
Now, the feature matrix X is input into the optimized neural network. The neural network transforms and processes the eigenvectors to ultimately output a low-dimensional eigenvector representation Y. The feature vector Y is the initial coding feature of the intelligent well lid, and captures key information and modes in the original monitoring data.
For example, if the original monitoring data has 100 dimensions (such as 10 parameters for water flow monitoring, 20 parameters for harmful gas monitoring and 70 parameters for noise monitoring), and the optimized neural network outputs a 10-dimensional feature vector Y, the feature vector is a highly compressed and abstract representation of the original monitoring data. It contains critical information in the original data while removing redundant and noisy information.
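The 100-to-10-dimensional compression in this example can be sketched as a single forward pass. The tanh layer and the random weights below are stand-ins, not the patent's actual network architecture:

```python
import math
import random

# Illustrative forward pass matching the worked example above: a
# 100-dimensional monitoring vector is compressed into a 10-dimensional
# initial coding feature. The single tanh layer and the random weights
# are stand-ins for the optimized network, not the patent's architecture.

random.seed(0)
IN_DIM, OUT_DIM = 100, 10   # 10 water-flow + 20 gas + 70 noise parameters

weights = [[random.uniform(-0.1, 0.1) for _ in range(IN_DIM)]
           for _ in range(OUT_DIM)]
bias = [0.0] * OUT_DIM

def encode(x):
    # y_j = tanh(sum_i W[j][i] * x[i] + b[j]): compressed representation
    return [math.tanh(sum(w * v for w, v in zip(row, x)) + b)
            for row, b in zip(weights, bias)]

x = [random.random() for _ in range(IN_DIM)]  # one well lid's raw feature vector
y = encode(x)
print(len(y))  # → 10
```

The output vector Y keeps only 10 numbers per well lid, which is the "highly compressed and abstract representation" the text describes.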
Through the execution of step S600, the computer device successfully performs further optimization on the adjusted initial neural network, and generates the initial coding feature of the intelligent well lid. The features provide powerful support for subsequent data analysis and early warning tasks, and bring convenience and benefit for management and maintenance of smart cities.
As an implementation manner, step S100, obtaining a first monitoring data set corresponding to each intelligent well lid in the intelligent well lid networking, and based on the first monitoring data set, obtaining a second monitoring data set associated with a past intelligent well lid sequence of each sewer health state in the sewer health state set, includes:
step S110, acquiring water flow monitoring data, harmful gas monitoring data and noise monitoring data corresponding to all intelligent well covers in the intelligent well cover networking, and fusing the water flow monitoring data, the harmful gas monitoring data and the noise monitoring data corresponding to the same intelligent well covers to acquire a first monitoring data set corresponding to all intelligent well covers;
Step S120, acquiring a past intelligent well lid sequence corresponding to each sewer health state in the sewer health state set, and sorting the intelligent well lids covered by the past intelligent well lid sequence in descending order of their corresponding early warning trigger times to obtain a collated past intelligent well lid sequence;
Step S130, combining the first monitoring data sets corresponding to the intelligent well lids covered by the collated past intelligent well lid sequences to obtain the second monitoring data sets associated with the past intelligent well lid sequences of the respective sewer health states.
In step S110, the computer device collects and integrates multi-source monitoring data of each of the smart well lids in the smart well lid network. These data include water flow monitoring data (e.g., water level, flow), harmful gas monitoring data (e.g., hydrogen sulfide, methane), and noise monitoring data (e.g., water leakage, pipe cracking). These data originate from various sensors on the intelligent well cover, which record in real time the dynamic changes of the well cover and its surrounding environment.
For example:
Water flow monitoring data: assume the water level sensor of a smart well lid records [0.5 m, 0.6 m, 0.7 m, 0.4 m] within a certain hour.
Harmful gas monitoring data: the hydrogen sulfide sensor of the same well lid records concentrations of [5 ppm, 6 ppm, 7 ppm, 4 ppm] over the same period.
Noise monitoring data: the noise sensor records a series of sampled audio values representing the noise level near the well lid (these can capture abnormal noise inside the sewer, such as leakage or pipe breakage, helping to locate a fault point quickly).
The computer device classifies the data according to the intelligent well lid, and fuses different types of monitoring data of the same well lid in the same time period to form a comprehensive data set, namely a first monitoring data set. This process may involve time alignment of the data, missing value processing, and outlier detection to ensure accuracy and integrity of the data.
First monitoring data set example:
{
  "Smart well lid 1": {
    "Water flow monitoring data": [0.5, 0.6, 0.7, ..., 0.4],
    "Harmful gas monitoring data": [5, 6, 7, ..., 4],
    "Noise monitoring data": [audio_sample1, audio_sample2, ...]
  },
  "Smart well lid 2": {
    "Water flow monitoring data": [0.4, 0.5, 0.6, ...],
    "Harmful gas monitoring data": [3, 4, 5, ...],
    "Noise monitoring data": [audio_sample3, audio_sample4, ...]
  }
}
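The fusion in step S110 can be sketched as grouping sensor streams by well lid identifier. The field names and readings below mirror the example above but are otherwise invented:

```python
# Step S110 sketch: group per-sensor readings under one record per well
# lid. The identifiers and readings are invented stand-ins.

raw_readings = [
    ("smart_well_1", "water_flow", [0.5, 0.6, 0.7, 0.4]),
    ("smart_well_1", "harmful_gas", [5, 6, 7, 4]),
    ("smart_well_1", "noise", ["audio_sample1", "audio_sample2"]),
    ("smart_well_2", "water_flow", [0.4, 0.5, 0.6]),
]

def fuse(readings):
    # in practice this step would also time-align, fill missing values
    # and detect outliers; here we only group by well lid and sensor type
    fused = {}
    for cover_id, sensor_type, values in readings:
        fused.setdefault(cover_id, {})[sensor_type] = values
    return fused

first_monitoring_set = fuse(raw_readings)
print(sorted(first_monitoring_set["smart_well_1"]))  # → ['harmful_gas', 'noise', 'water_flow']
```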
In step S120, the computer device obtains a past smart well lid sequence corresponding to each health status from the sewer health status set. These sequences record the intelligent well lid triggering specific health status at different time points, which is an important basis for analyzing the health status of the sewer.
For example, assume that the sewer health status set includes three states of "normal", "slightly blocked" and "severely blocked". For the "slight blockage" condition, the past smart manhole cover sequence may include smart manhole cover a (trigger time T1), smart manhole cover B (trigger time T2), and smart manhole cover C (trigger time T3).
The computer device will order the intelligent well lids based on these trigger times, typically in descending order (i.e. the most recently triggered well lids are arranged in front). This helps to give priority to recently occurring health events for subsequent analysis.
Past intelligent well lid sequence example:
"slight blockage": [
  { "Smart well lid": "Smart well lid C", "trigger time": T3 },
  { "Smart well lid": "Smart well lid B", "trigger time": T2 },
  { "Smart well lid": "Smart well lid A", "trigger time": T1 }
]
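The descending sort of step S120 can be sketched as follows; the trigger times T1 < T2 < T3 are represented as plain integers for illustration:

```python
# Step S120 sketch: order a past well lid sequence by early warning
# trigger time, most recent first. T1 < T2 < T3 become integers 1..3.

sequence = [
    {"cover": "smart_well_A", "trigger_time": 1},
    {"cover": "smart_well_C", "trigger_time": 3},
    {"cover": "smart_well_B", "trigger_time": 2},
]

ordered = sorted(sequence, key=lambda e: e["trigger_time"], reverse=True)
print([e["cover"] for e in ordered])  # → ['smart_well_C', 'smart_well_B', 'smart_well_A']
```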
In step S130, the computer device constructs a second monitoring data set using the sorted past smart well lid sequence. This set is different from the first monitoring data set in that it focuses on past triggering events for a particular sewer health status and includes the first monitoring data set of the intelligent well lid that triggered these events.
For each sewer health state, the computer device traverses the corresponding past intelligent well lid sequence, and extracts the data fragments of the well lids near the triggering moment from the first monitoring data set. The data segments are combined to form a second monitoring data set associated with the health condition.
For example, assume that a "slight blockage" condition is of concern, and that the smart well lid C is known to trigger this condition at time T3. The computer means will find data of the smart well lid C around time T3, such as data within T3-1 hours to t3+1 hours, from the first monitoring data set.
Second monitoring data set example:
"slight blockage": {
  "Smart well lid C": {
    "Water flow monitoring data": [water_level_data_around_T3],
    "Harmful gas monitoring data": [gas_concentration_data_around_T3],
    "Noise monitoring data": [noise_data_around_T3]
  },
  "Smart well lid B": { ... },
  ...
}
In this example, 'water_level_data_around_t3', 'gas_concentration_data_around_t3' and 'noise_data_around_t3' represent water flow monitoring data, harmful gas monitoring data and noise monitoring data of the intelligent manhole cover C around time T3, respectively.
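The windowed extraction of step S130 (data within one hour either side of the trigger moment) can be sketched as follows, with timestamps in hours and water levels in centimetres, both invented:

```python
# Step S130 sketch: keep only readings within +/- 1 hour of the trigger
# moment T3. Timestamps are hours, water levels centimetres (invented).

readings = [(t, t * 10) for t in range(10)]   # (timestamp_hour, water_level_cm)

def window(series, trigger_t, half_width=1):
    return [(t, v) for t, v in series
            if trigger_t - half_width <= t <= trigger_t + half_width]

t3 = 5
print(window(readings, t3))  # → [(4, 40), (5, 50), (6, 60)]
```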
The computer device successfully constructs a second set of monitoring data for each sewer health state, via step S130. These sets will serve as input data for subsequent steps (e.g., neural network tuning, commonality metrics, etc.), providing a solid basis for intelligent well lid data analysis.
In summary, step S100 effectively collects multi-source monitoring data in the intelligent well lid networking and collates past trigger event data associated with the health status of the sewer through three sub-steps of data fusion, sequence collation and data set construction.
As one embodiment, step S200, building a first network tuning template based on the second monitoring data set includes:
Step S210, determining a shielding part in the second monitoring data set based on the data screening ratio, and taking the data item of the shielding part as a candidate monitoring data set;
Step S220, performing a data masking operation on the candidate monitoring data set in the second monitoring data set to obtain a basic tuning template, and adding indicator marks to the basic tuning template to obtain the first network tuning template.
In step S210, the computer means determines the masked parts of the second set of monitored data, i.e. those data items which are to be hidden or not provided in the subsequent data reasoning process. This process is based on a data screening ratio that reflects the proportion of data items that need to be masked in the entire data set.
For example, assume that the second monitoring data set comprises 100 data items, each representing monitoring data (e.g., water level, concentration of harmful gases, noise level, etc.) of one smart well lid at a particular point in time. Based on the data screening ratio (e.g., set to 20%), the computer device will select 20 data items as the mask portion, either randomly or based on a particular rule (e.g., importance of the data, timestamp, etc.).
{
  "Masked portion": [
    { "data item ID": 1, "data type": "water level", "timestamp": "T1", "value": 0.5 },
    { "data item ID": 5, "data type": "harmful gas concentration", "timestamp": "T5", "value": "6 ppm" },
    ...,
    { "data item ID": 95, "data type": "noise level", "timestamp": "T95", "value": "75 dB" }
  ],
  "Unmasked portion": [ ... ]
}
In the above example, the 20 data items selected (e.g., data items with IDs 1, 5,..95) constitute the masked portion, while the remaining data items belong to the unmasked portion. These selected data items are then used as candidate monitoring data sets that will be hidden during subsequent data reasoning to test the neural network's ability to predict unknown data.
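The ratio-based selection of the masked portion in step S210 can be sketched as follows; random selection is one of the strategies the text mentions, and the fixed seed only keeps the illustration reproducible:

```python
import random

# Step S210 sketch: select `ratio` of the data items as the masked
# portion. Random choice is one strategy the text mentions; the seed
# only makes the illustration reproducible.

def choose_mask(items, ratio, seed=42):
    rng = random.Random(seed)
    k = int(len(items) * ratio)                      # e.g. 20% of 100 items -> 20
    return sorted(rng.sample(range(len(items)), k))  # indices of masked items

data_item_ids = list(range(1, 101))   # 100 data items
masked_idx = choose_mask(data_item_ids, 0.20)
print(len(masked_idx))  # → 20
```

Selection by importance or timestamp, also mentioned in the text, would replace the random sample with a scored sort while keeping the same ratio-based cutoff.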
In step S220, the computer device performs a data masking operation on the candidate monitoring data set to construct a base tuning template. The data masking operation means that data items in the candidate monitoring data set are removed from or replaced with placeholders, random values, etc. from the second monitoring data set to simulate the situation of data missing or uncertain in the real world.
For example, for the mask portion determined in step S210, the computer device may remove it directly from the second monitoring data set, or insert placeholders (e.g., "NaN", "NULL", or specifically encoded placeholder data) at corresponding locations. At the same time, the computer device may also add an indicator at the masking position in order to preserve the integrity and continuity of the data sequence.
{
  "Basic tuning template": [
    { "data item ID": 1, "data type": "water level", "timestamp": "T1", "value": NaN },  // masked, indicator added
    { "data item ID": 2, "data type": "water level", "timestamp": "T2", "value": 0.6 },
    { "data item ID": 3, "data type": "water level", "timestamp": "T3", "value": 0.7 },
    ...,
    { "data item ID": 5, "data type": "harmful gas concentration", "timestamp": "T5", "value": NaN },  // masked, indicator added
    ...,
    { "data item ID": 95, "data type": "noise level", "timestamp": "T95", "value": NaN }  // masked, indicator added
  ]
}
In the above example, the masked data items (e.g., those with IDs 1, 5, ..., 95) have been removed or replaced with NaN (Not a Number), and an indicator has been added at the corresponding location (in this example the indicator is reflected directly in the replaced value; in practice it may take the form of an additional field).
The computer means then adds an indicator mark on the basic calibration template to clarify the start position of the data sequence or the position of the specific data item. These markers are critical to neural networks because they can help the network understand the structure and context information of the data.
{
  "First network tuning template": [
    { "data item ID": 0, "data type": "sequence start", "timestamp": "N/A", "value": "START" },  // sequence start marker
    { "data item ID": 1, "data type": "water level", "timestamp": "T1", "value": NaN, "indicator": "mask" },
    { "data item ID": 2, "data type": "water level", "timestamp": "T2", "value": 0.6 },
    ...,
    { "data item ID": 5, "data type": "harmful gas concentration", "timestamp": "T5", "value": NaN, "indicator": "mask" },
    ...,
    { "data item ID": 95, "data type": "noise level", "timestamp": "T95", "value": NaN, "indicator": "mask" },
    { "data item ID": 96, "data type": "sequence end", "timestamp": "N/A", "value": "END" }  // sequence end marker
  ]
}
In this example, marks for the beginning and END of the sequence (respectively, 'START' and 'END') are added, as well as an indicator mark (e.g., 'mask') for each masked data item. These markers will be part of the neural network input, helping the network to better understand the structure and context of the data.
Through the execution of steps S210 and S220, the computer apparatus constructs a first network tuning template. The template comprises a shielding part, an unshielded part and necessary indication marks, and provides a basis for the follow-up neural network data reasoning and commonality measurement.
As an implementation manner, step S220 performs a data masking operation on the candidate monitoring data set in the second monitoring data set to obtain a basic calibration template, including:
Step S221, performing a data masking operation on the candidate monitoring data set in the second monitoring data set based on the first masking support degree and the second masking support degree to obtain a basic tuning template, wherein the first masking support degree represents the likelihood of changing a candidate monitoring data item into a placeholder, and the second masking support degree represents the likelihood of changing a candidate monitoring data item into random data.
In step S220 of the smart well lid data analysis flow, fine control of the data masking operation is key to building an effective basic tuning template. Step S221, as one of the specific embodiments of the data masking operation, provides a more flexible and controllable method for the data masking process by introducing two concepts of the first mask support and the second mask support.
In step S221, the computer device performs a data masking operation on the candidate monitoring data set in the second monitoring data set according to the first masking support degree and the second masking support degree. These two supporters represent two different data masking strategies, namely the possibility of replacing the candidate data item with a placeholder (e.g. "NaN", "NULL", etc.), and the possibility of replacing the candidate data item with random data.
The first masking support (First Masking Support, FMS) represents the likelihood of changing a data item in the candidate monitoring data set into a placeholder. A placeholder is a special marker indicating that a data item exists but its specific content is unknown or unavailable. In neural network training, placeholders can help the model learn how to handle missing data.
A second mask support (Second Masking Support, SMS) indicates a degree of likelihood of altering the data items in the candidate monitoring data set to random data. Unlike placeholders, random data are values that are actually present, but they are independent of the original data, and are typically used to simulate data noise or uncertainty.
For example, suppose that the second monitoring data set contains 100 data items, wherein 20 data items are selected as candidate monitoring data sets for the masking operation. The first mask support may be set to 0.6 and the second mask support to 0.4, which means that during the masking process there is a 60% probability of replacing the data item with a placeholder and a 40% probability of replacing the data item with random data.
In a specific implementation process, the computer device traverses each data item in the candidate monitoring data set, and decides how to mask the data item according to a preset first shielding support degree and a second shielding support degree. For example:
For the first candidate data item, the computer means generates a random number (e.g. evenly distributed between 0 and 1).
If the random number is less than or equal to the first mask support (0.6), the data item is replaced with a placeholder (e.g. "NaN").
If the random number is greater than the first mask support but less than or equal to the sum of the first mask support and the second mask support (0.6+0.4=1.0), the data item is replaced with a randomly generated value.
As an example of the data masking operation, assume a candidate monitoring data item with an original value of 0.75 (the water level of a certain smart well lid at a specific point in time). Under the masking strategy, if the generated random number is less than or equal to 0.6, the masked item becomes NaN. If the random number is greater than 0.6 but less than or equal to 1.0, the item becomes a random value such as 0.32 (generated over a range, independent of the original value 0.75). By repeating this process for every item in the candidate monitoring data set, the computer device constructs the basic tuning template. The template therefore contains three types of items, namely raw data, placeholders and random data, which together simulate the diversity and uncertainty of real-world data.
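The dual-support masking decision described above can be sketched as follows; NaN stands in for the placeholder and the random replacement range 0 to 1 is an assumption:

```python
import random

# Step S221 sketch: each candidate item becomes a placeholder with
# probability FMS = 0.6 and a random value with probability SMS = 0.4.
# NaN stands in for the placeholder; the 0..1 random range is assumed.

FMS, SMS = 0.6, 0.4   # first / second masking support, FMS + SMS = 1.0

def mask_item(value, rng):
    r = rng.random()          # uniform in [0, 1)
    if r <= FMS:
        return float("nan")   # placeholder replacement
    return rng.random()       # random-data replacement (independent of value)

rng = random.Random(7)
candidates = [0.75, 0.45, 0.62, 0.31, 0.88]
masked = [mask_item(v, rng) for v in candidates]
print(len(masked))  # → 5
```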
After the data masking operation is completed, the computer device obtains the basic tuning template. This template contains not only the masked data items but possibly other necessary information, such as the timestamp of each data item, a type identification, and possibly indication marks (used to indicate the beginning or end of the data sequence, or the position of a particular data item).
Basic tuning template example:
{ "basic tuning template": [
  { "timestamp": "T1", "type": "water level", "value": 0.65 },
  { "timestamp": "T2", "type": "water level", "value": "NaN" },   // placeholder replacement
  { "timestamp": "T3", "type": "harmful gas concentration", "value": 0.03 },
  { "timestamp": "T4", "type": "noise level", "value": 72.3 },
  { "timestamp": "T5", "type": "water level", "value": 0.45 },
  { "timestamp": "T6", "type": "water level", "value": 37.2 },    // random data replacement
  ... ] }
In this example, the basic tuning template contains three types of data items: original data, placeholder substitutions, and random data substitutions. These data items will be used as inputs to the neural network for training and optimizing the model, to increase its ability to process missing and noisy data.
Through implementation of step S221, the computer device can flexibly control the data masking process based on the dual support policy, so as to construct a basic tuning template which is not only in accordance with the actual data distribution situation, but also has a certain challenge. This provides powerful support for subsequent neural network tuning and optimization.
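The dual-support masking of step S221 can be sketched as follows, using the 0.6/0.4 split from the example; the function name, value ranges, and seed are illustrative assumptions:

```python
import random

def build_base_tuning_template(items, candidate_idx, p_placeholder=0.6, p_random=0.4, seed=42):
    """Mask the candidate items with a placeholder or a random value.

    p_placeholder / p_random play the roles of the first and second
    mask supports from the text; here they sum to 1.0, so every
    candidate item is replaced one way or the other.
    """
    rng = random.Random(seed)
    template = list(items)
    for i in candidate_idx:
        r = rng.random()  # uniform random number in [0, 1)
        if r <= p_placeholder:
            template[i] = "NaN"  # placeholder replacement
        else:  # r <= p_placeholder + p_random == 1.0: random-value replacement
            template[i] = round(rng.uniform(0.0, 1.0), 2)
    return template
```

Non-candidate items pass through unchanged, so the resulting template mixes original data, placeholders, and random data as described above.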
In step S200, as an implementation manner, performing data reasoning on the first network tuning template according to the initial neural network, to obtain a reasoning confidence corresponding to the masked portion in the first network tuning template, includes:
Step S230, loading a first network tuning template into an initial neural network, and carrying out feature vector representation on the first network tuning template based on the initial neural network to obtain a first template feature vector representation corresponding to the first network tuning template;
Step S240, carrying out data reasoning on the masked portion in the first network tuning template based on the first template feature vector representation, to obtain the reasoning confidence corresponding to the masked portion in the first network tuning template.
In step S230, the computer device loads the first network tuning template into the initial neural network and performs feature vector representation of the tuning template based on the neural network. Feature vector representation is a core concept in machine learning that captures the intrinsic features and patterns of data by converting raw data into vectors in a high-dimensional space.
In a specific example:
1. Data preprocessing-the computer device may need to preprocess the data in the template, such as data normalization, transcoding, etc., before loading the tuning template to ensure that the data meets the input requirements of the neural network.
2. Model loading, the computer device loads the initial neural network model into the memory, and the characteristic extraction and the reasoning calculation are prepared. The initial neural network is a model which is subjected to preliminary training and has certain data processing and pattern recognition capabilities.
3. Feature extraction once the tuning template is loaded into the neural network, the computer device processes the data in the template using the feature extraction layer (e.g., convolutional layer, pooling layer, etc.) of the neural network. These layers transform the raw data into a higher level representation of the features through a series of mathematical operations (e.g., convolution, pooling, etc.).
4. Feature vector generation the computer device obtains a series of feature maps (feature maps) or feature vectors through processing by the feature extraction layer. The feature vectors capture key information and modes of data in the adjustment template, and provide a basis for subsequent data reasoning.
5. The first template feature vector representation finally, the computer means will combine all feature vectors to form a complete feature vector representation, i.e. the first template feature vector representation. This vector represents the position of the first network tuning template in the high-dimensional feature space in the neural network.
For example, assume that the first network tuning template contains the following data items:
{ "timestamp": "T1", "type": "water level", "value": 0.65 },
{ "timestamp": "T2", "type": "water level", "value": "NaN" },   // masked portion
{ "timestamp": "T3", "type": "harmful gas concentration", "value": 0.03 }, ...
In the eigenvector representation, the neural network may convert each data item into a series of numerical characteristics, such as the height of the water level, the concentration of harmful gases, etc. For masked parts (e.g., water level data under a T2 timestamp), the neural network may infer its possible values based on other unmasked data items and learning rules inside the model.
After feature extraction and combination, the first template feature vector representation may take a form such as [0.32, 0.15, 0.68, ..., nan_inferred_value, ...], where nan_inferred_value is the value the neural network infers for the masked portion (the water level data at the T2 timestamp). In an actual representation this inferred value may be a placeholder or a special code; here it is assumed, for purposes of illustration, to have been converted to a concrete value.
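As a rough illustration of turning template items into a numeric vector (a hand-rolled stand-in for the network's learned feature extraction layers; the type vocabulary and per-item layout are assumptions):

```python
import numpy as np

TYPES = ["water level", "harmful gas concentration", "noise level"]  # assumed type vocabulary

def encode_template(items):
    """Turn template items into one flat feature vector.

    Each item becomes [one-hot type ..., value, is_masked]; a masked
    item ("NaN") contributes value 0.0 and mask flag 1.0. A trained
    network would learn such a representation rather than hand-code it.
    """
    rows = []
    for it in items:
        one_hot = [1.0 if it["type"] == t else 0.0 for t in TYPES]
        masked = it["value"] == "NaN"
        rows.append(one_hot + [0.0 if masked else float(it["value"]),
                               1.0 if masked else 0.0])
    return np.asarray(rows).ravel()
```

The mask flag tells downstream layers which positions must be inferred rather than read.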
In step S240, the computer device performs data reasoning on the mask portion in the first network tuning template based on the first template feature vector representation to obtain a reasoning confidence. The inference confidence is an important index for measuring the prediction accuracy of the neural network on the shielding part.
Specific examples are:
1. inference calculation after obtaining the first template feature vector representation, the computer device inputs this vector to the inference layer (e.g., full connection layer, output layer, etc.) of the neural network for calculation. The inference layer predicts the true value of the mask portion by a series of mathematical operations (e.g., matrix multiplication, nonlinear activation, etc.).
2. Confidence assessment the neural network will typically output a probability distribution or confidence score during the inference process to represent the degree of confidence for each possible predicted value. For the predicted value of the mask portion, the computer means may extract a corresponding confidence score as the inference confidence.
3. Confidence processing-in practical applications, the inference confidence may require further processing and analysis. For example, the computer device may convert the confidence score into a percentage form or determine the confidence of the predicted value based on a particular threshold.
For example, continuing the example above, assume the neural network infers a value of 0.55 for the masked portion (the water level data at the T2 timestamp) and outputs a confidence score of 0.8. This means the neural network is 80% confident that the water level data at the T2 timestamp is 0.55.
In practice, this confidence score may be used to evaluate the prediction accuracy of the neural network. If the confidence is high (e.g., near 1.0), it is indicated that the prediction of the neural network is more reliable, and if the confidence is low (e.g., near 0.0), it is indicated that the prediction may have a greater uncertainty or error.
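Confidence extraction from the inference layer can be sketched as a softmax over candidate values; the logits and the candidate list below are illustrative:

```python
import numpy as np

def infer_with_confidence(logits, candidate_values):
    """Softmax the inference-layer logits and return (prediction, confidence).

    The confidence is the probability mass assigned to the argmax
    candidate, matching the confidence-score reading in the text.
    """
    z = np.asarray(logits, dtype=float)
    p = np.exp(z - z.max())  # shift for numerical stability
    p /= p.sum()
    i = int(p.argmax())
    return candidate_values[i], float(p[i])
```

A confidence near 1.0 then signals a reliable prediction, and one near 0.0 signals large uncertainty, as described above.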
Through the implementation of steps S230 and S240, the computer apparatus can perform feature vector representation and data reasoning on the first network tuning template based on the initial neural network, thereby obtaining a reasoning confidence of the mask portion. This process not only helps to evaluate the performance of the neural network, but also provides an important reference for subsequent model tuning and optimization.
As an implementation manner, step S300, based on the initial neural network, performs a commonality measurement on the first network calibration template and a first trigger calibration template corresponding to the first network calibration template, to obtain a first template commonality measurement result, including:
Step S310, using the first monitoring data set of the next intelligent well lid to trigger the target state as the first trigger tuning template corresponding to the first network tuning template, where the first network tuning template and the first trigger tuning template belong to intelligent well lids that have previously triggered the same sewer health state;
step S320, loading a first trigger alignment template to an initial neural network, and carrying out feature vector representation on the first trigger alignment template based on a feature extraction component in the initial neural network to obtain a second template feature vector representation corresponding to the first trigger alignment template;
step S330, a first sewer health state feature vector representation related to the sewer health state related to the first trigger adjustment template is obtained, and a first template commonality measurement result is obtained based on the first sewer health state feature vector representation, the second template feature vector representation and the first template feature vector representation corresponding to the first network adjustment template.
In step S310, the computer device determines the first trigger tuning template. This template is associated with the first network tuning template and consists of the first monitoring data set of the next intelligent well lid to trigger the same sewer health state.
Specific examples are:
1. And identifying the target state, namely firstly, the computer device needs to identify the health state of the target sewer corresponding to the first network adjusting template. The target state is the health state triggered by all intelligent well covers in the adjustment template.
2. And triggering well lid screening, namely screening the next well lid triggering the target state from all intelligent well lids triggering the target state by the computer device. This first set of monitoring data for the manhole cover will be used as a first trigger calibration template.
3. And finally, the computer device associates the first trigger adjustment template with the first network adjustment template to ensure that the first trigger adjustment template and the first network adjustment template belong to past trigger events with the same sewer health state.
For example, assume that the first network tuning template corresponds to a "light blockage" condition, and that the template includes a first set of monitoring data for the smart well lid a and the smart well lid B when the "light blockage" condition is triggered. The computer device recognizes that the next smart well lid that triggers the "slight blockage" condition is the smart well lid C. Thus, the first monitoring data set of the smart well lid C will be determined as the first trigger calibration template.
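The screening of the next triggering well lid might look like the following sketch; the event-record layout and all names are assumptions:

```python
def pick_first_trigger_template(events, target_state, template_covers):
    """Return the monitoring data of the next well lid that triggers
    `target_state` and is not already in the tuning template.

    `events` is a time-ordered list of (cover_id, state, data) trigger
    records; `template_covers` holds the cover ids already present in
    the first network tuning template.
    """
    for cover_id, state, data in events:
        if state == target_state and cover_id not in template_covers:
            return cover_id, data
    return None  # no further cover has triggered the target state
```

In the example above, with covers A and B already in the template, the first remaining "slight blockage" trigger is cover C.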
In step S320, the computer device loads the first trigger calibration template into the initial neural network and represents it with feature vectors using feature extraction components in the network. This process is similar to the processing of the first network tuning template in step S230.
Specific examples are:
1. And data loading, namely firstly, loading a first trigger adjustment template into an initial neural network by a computer device, and preparing for feature extraction.
2. Feature extraction the computer means then processes the data in the first trigger alignment template using a feature extraction layer (e.g., convolutional layer, pooling layer, etc.) in the neural network. These layers transform the raw data into a higher level representation of the features through a series of mathematical operations.
3. Feature vector generation the computer means will obtain a feature vector representation, i.e. a second template feature vector representation, by processing of the feature extraction layer. This vector captures key information and patterns of data in the first trigger alignment template.
For example, continuing with the example above, assume that the first set of monitoring data for the smart well lid C includes data for a plurality of dimensions including water level, concentration of harmful gases, and noise level. In the feature extraction process, the neural network may convert these data into a series of numerical features, such as the height of the water level, the concentration of harmful gases, and the spectral features of noise. Eventually, these features will be combined into one feature vector representation, the second template feature vector representation.
In step S330, the computer device will calculate a first template commonality metric result based on the first sewer health status feature vector representation, the second template feature vector representation, and the first template feature vector representation. This process involves similarity and correlation analysis between multiple vectors.
Specific examples are:
1. Feature vector acquisition first, the computer device needs to acquire a feature vector representation of the first sewer health state. This vector, which may be obtained in a previous step by clustering, classification or other machine learning algorithm, represents a characteristic feature of the state of health in a feature space.
2. Vector similarity calculation the computer means will then calculate a similarity between the second template feature vector representation and the first sewer health state feature vector representation. This may be achieved by cosine similarity, euclidean distance, or other metrics.
3. And finally, comprehensively evaluating the commonality measurement result between the first network tuning template and the first trigger tuning template by the computer device by combining the characteristic vector representation of the first template (namely the characteristic vector representation of the first network tuning template) and the similarity calculation result. This process may involve complex mathematical operations and logical reasoning.
For example, assume the feature vector representation of the first sewer health state is [h1, h2, h3, ...], the second template feature vector representation is [f1, f2, f3, ...], and the first template feature vector representation is [t1, t2, t3, ...]. The computer device first calculates the cosine similarity between [f1, f2, f3, ...] and [h1, h2, h3, ...], obtaining a similarity score S1. The computer device may then compute the similarity S2 between [t1, t2, t3, ...] and [h1, h2, h3, ...], and the similarity S3 between [t1, t2, t3, ...] and [f1, f2, f3, ...]. Finally, the computer device may take a weighted sum of S1, S2 and S3, or another combination, to obtain the first template commonality measurement result.
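The similarity computation and weighted combination just described can be sketched as follows; the weights are illustrative assumptions:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity: dot product over the product of vector norms."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def commonality(h, f, t, w=(0.5, 0.3, 0.2)):
    """Weighted combination of the three pairwise similarities S1, S2, S3.

    h: sewer-health-state vector, f: trigger (second) template vector,
    t: first template vector. The weights w are illustrative.
    """
    s1, s2, s3 = cosine(f, h), cosine(t, h), cosine(t, f)
    return w[0] * s1 + w[1] * s2 + w[2] * s3
```

Other combinations (e.g. products or learned weights) would fit the same interface.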
Through the implementation of steps S310 to S330, the computer device can perform a commonality measure on the first network tuning template and the first trigger tuning template based on the initial neural network, so as to obtain similarity and relevance information between them. These information are critical to the subsequent model tuning and optimization, which helps to enhance the processing and analysis capabilities of the neural network on the smart well lid data.
In step S330, a first template commonality measurement result is obtained based on the first sewer health state feature vector representation, the second template feature vector representation and the first template feature vector representation corresponding to the first network calibration template, wherein the number of the first network calibration templates is x, and x is more than or equal to 1, and the method comprises the following steps:
Step S331, obtaining a first commonality measurement result between a first sewer health state feature vector representation and a second template feature vector representation, and performing exponential transformation on the first commonality measurement result to obtain a first temporary commonality measurement result;
step S332, obtaining a second commonality measurement result between the first sewer health state feature vector representation and the first template feature vector representation corresponding to each first network adjustment template, and performing exponential transformation on the second commonality measurement result to obtain a second temporary commonality measurement result related to each first network adjustment template;
And S333, summing the second temporary commonality measurement results related to each first network adjustment template to obtain a commonality measurement sum value, and determining the commonality measurement result of the first template based on the ratio between the first temporary commonality measurement result and the commonality measurement sum value.
In step S330 of the smart well lid data analysis flow, calculating the first template commonality metric is a key step in evaluating the similarity and association between the first network tuning template and the first trigger tuning template. When the number of the first network calibration templates is x (x is equal to or greater than 1), the calculation process becomes more complex because it needs to comprehensively consider the common measurement result between the plurality of calibration templates and the feature vector of the health state of the sewer.
In step S331, the computer means calculates a first commonality measurement result between the first sewer health status feature vector representation and the second template feature vector representation, and performs an exponential transformation on the result to obtain a first temporary commonality measurement result.
Specific examples are:
1. Commonality metric calculation: first, the computer device needs to calculate the commonality measurement result between the two feature vectors. This can be achieved by various methods such as cosine similarity, Euclidean distance, or the Pearson correlation coefficient. Here cosine similarity is assumed as the measurement method, calculated as:
cos(A, B) = (A · B) / (‖A‖ × ‖B‖)
where A and B are the two feature vectors, A · B is their dot product, and ‖A‖ and ‖B‖ are the moduli (lengths) of the vectors.
Substituting the first sewer health state feature vector representation (denoted H) and the second template feature vector representation (denoted F) into this formula yields the first commonality measurement result (denoted S_1).
2. Exponential transformation: the computer device then applies an exponential transformation to the first commonality measurement result. The exponential transformation is a nonlinear transformation that maps the measurement result to a new range of values, thereby enhancing the degree of discrimination of the result. The transformation is E_1 = e^(α·S_1), where e is the base of the natural logarithm and α is an adjustable parameter controlling the intensity and direction of the transformation. In practical application, the value of α may be determined according to specific requirements and data characteristics.
For example, assume the cosine similarity between the first sewer health state feature vector representation and the second template feature vector representation is S_1 = 0.95. Taking α = 1, the first temporary commonality measurement result is E_1 = e^0.95 ≈ 2.59.
In step S332, the computer device will calculate second commonality metric results between the first sewer health status feature vector representations and the first template feature vector representations corresponding to the respective first network calibration templates, and perform an exponential transformation and summation process on these results.
Specific examples are:
1. Commonality metric calculation: for each first network tuning template (the i-th template, where 1 ≤ i ≤ x), the computer device needs to calculate the second commonality measurement result S_i between the corresponding first template feature vector representation T_i and the first sewer health state feature vector representation H. This can likewise be achieved by cosine similarity or other metrics.
2. Exponential transformation: similar to step S331, the computer device applies an exponential transformation to each second commonality measurement result to obtain the second temporary commonality measurement result G_i = e^(α·S_i), where α is the same adjustable parameter as in step S331.
3. Summation processing: finally, the computer device sums all the second temporary commonality measurement results to obtain the commonality metric sum E_sum = G_1 + G_2 + ... + G_x. For example, continuing the example above, assume there are 3 first network tuning templates (i.e., x = 3) whose second commonality measurement results, computed by cosine similarity, are 0.75, 0.85 and 0.90 respectively. Taking α = 1, the corresponding second temporary commonality measurement results are e^0.75 ≈ 2.12, e^0.85 ≈ 2.34 and e^0.90 ≈ 2.46, so the commonality metric sum is E_sum ≈ 2.12 + 2.34 + 2.46 = 6.92.
In step S333, the computer device will determine a first template commonality metric result based on the ratio between the first temporary commonality metric result and the commonality metric sum value.
Specific examples are:
1. Ratio calculation: the computer device first calculates the ratio R between the first temporary commonality measurement result E_1 and the commonality metric sum E_sum, i.e., R = E_1 / E_sum.
2. The result is determined by the computer means using this ratio R as a first template commonality measure. This result reflects the overall level of the commonality measure between the first trigger calibration template and the first sewer health status feature vector relative to the commonality measure between all of the first network calibration templates and the feature vector.
For example, continuing the example above, the first temporary commonality measurement result is E_1 ≈ 2.59 and the commonality metric sum is E_sum ≈ 6.92, so the first template commonality measurement result is R ≈ 2.59 / 6.92 ≈ 0.374. This indicates that the commonality measure between the first trigger tuning template and the first sewer health state feature vector is about 37% of the sum of the commonality measures between all tuning templates and that feature vector.
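Steps S331 to S333 together amount to a softmax-style normalization of the commonality metrics; a sketch, where the α parameter and the example similarities are illustrative:

```python
import math

def template_commonality_ratio(s1, second_metrics, alpha=1.0):
    """Steps S331-S333 as a softmax-style ratio.

    s1: commonality between the health-state vector and the trigger
    template; second_metrics: commonality results for the x first
    network tuning templates. Returns e^(alpha*s1) / sum(e^(alpha*s_i)).
    """
    e1 = math.exp(alpha * s1)  # first temporary commonality result
    total = sum(math.exp(alpha * s) for s in second_metrics)  # commonality metric sum
    return e1 / total
```

With s1 = 0.95 and second metrics (0.75, 0.85, 0.90), the ratio comes out near 0.37, matching the worked figures in this section.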
Through implementation of steps S331 to S333, the computer device can comprehensively consider the common measurement results between the plurality of first network calibration templates and the feature vectors of the health status of the sewer, so as to more accurately evaluate the similarity and the association between the first triggering calibration templates and the health status of the sewer. This provides an important reference for subsequent model tuning and optimization.
In one embodiment, the number of shielding parts in the first network calibration template is y, and y is greater than or equal to 1, and step S400 is performed on the initial neural network based on the reasoning confidence level, the shielding parts in the first network calibration template and the first template commonality measurement result to obtain a calibrated initial neural network, and the method comprises the following steps:
Step S410, carrying out logarithmic transformation on the reasoning confidences corresponding to the y masked portions to obtain a probability degree transformation result for each masked portion, and summing the probability degree transformation results of the masked portions to obtain a summation result used as the masked data prediction error;
Step S420, determining a network tuning error corresponding to the initial neural network based on the prediction error of the shielding data and the first template commonality measurement result;
And step S430, repeatedly correcting the parameter of the initial neural network based on the network adjustment error, and stopping optimization when the network adjustment error meets the preset convergence requirement to obtain the adjusted initial neural network.
In step S410, the computer device performs logarithmic transformation on the inference confidence levels corresponding to the y mask portions in the first network tuning template, so as to obtain a probability degree transformation result of each mask portion. These transform results are then summed to obtain the masked data prediction error.
Specific examples are:
1. Logarithmic transformation: logarithmic transformation is a common data preprocessing method that maps the original data to a new range of values, thereby enhancing the separability and stability of the data. Here the computer device applies a logarithmic transformation to the reasoning confidence of each of the y masked portions; taking the standard negative log-likelihood form, L_i = -ln(P_i), where i denotes the i-th masked portion, P_i is its reasoning confidence, and ln is the natural logarithm function.
2. Summation processing: after the logarithmic transformation is completed, the computer device sums all the transformation results to obtain the masked data prediction error E_mask = L_1 + L_2 + ... + L_y, where y is the number of masked portions.
For example, assume there are 3 masked portions (i.e., y = 3) whose corresponding reasoning confidences are 0.8, 0.7 and 0.9. The logarithmic transformation gives L_1 = -ln(0.8) ≈ 0.223, L_2 = -ln(0.7) ≈ 0.357 and L_3 = -ln(0.9) ≈ 0.105. Summing these transformation results gives a masked data prediction error of E_mask ≈ 0.685.
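Reading the log-transform-and-sum of step S410 as a negative log-likelihood (an assumption about the sign convention, which the text leaves open), a minimal sketch:

```python
import math

def masked_prediction_error(confidences):
    """Negative log-likelihood over the y masked portions: E = -sum(ln(p_i)).

    High confidences yield a small error; a confidence near 0 for any
    masked portion blows the error up, penalizing poor inferences.
    """
    return -sum(math.log(p) for p in confidences)
```

For confidences 0.8, 0.7 and 0.9 this evaluates to roughly 0.685.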
In step S420, the computer device will determine a network tuning error corresponding to the initial neural network based on the masked data prediction error and the first template commonality metric result.
Specific examples are:
1. Commonality measure fusion first, the computer means may need to subject the first template commonality measure to an appropriate scaling or normalization process to ensure that it is on the same order of magnitude as the mask data prediction error. This can be achieved by a simple linear transformation or a nonlinear transformation.
2. Error comprehensive calculation: the computer device combines the masked data prediction error and the first template commonality measurement result by weighted summation (or another combination) to obtain the network tuning error. The weighted summation takes the form J = α·E_mask + β·E_common, where α and β are weight coefficients used to balance the contributions of the masked data prediction error and the commonality measurement result to the network tuning error.
For example, assume the masked data prediction error is 0.685, the first template commonality measurement result is 0.75 (after appropriate scaling or normalization), and the weight coefficients are α = 1 and β = 0.5. The network tuning error is then J = 1 × 0.685 + 0.5 × 0.75 = 0.685 + 0.375 = 1.06.
In step S430, the computer device repeatedly corrects the parameter of the initial neural network based on the network adjustment error, until the network adjustment error meets the preset convergence requirement, and then stops optimizing, thereby obtaining the initial neural network after adjustment.
Specific examples are:
1. gradient calculation firstly, the computer device needs to calculate the gradient of the network adjustment error relative to the neural network parameter. This is typically achieved by a back-propagation algorithm that propagates the error signal back layer by layer starting from the output layer and calculates the gradient of each layer parameter.
2. Parameter updating: the computer device updates the parameters of the neural network according to the calculated gradient. Common update methods include stochastic gradient descent (SGD) and the Adam optimizer. The update formula generally takes the form θ_{t+1} = θ_t - η·∇_θ J(θ_t), where θ_t is the parameter value at the t-th iteration, η is the learning rate, and ∇_θ J is the gradient of the network tuning error J with respect to the parameters θ.
3. And (3) convergence judgment, namely after each parameter update, checking whether the network adjustment error meets a preset convergence requirement or not by the computer device. Convergence requirements are typically set based on the magnitude or rate of change of the network tuning error. If the network tuning error is smaller than a certain threshold value or the variation of a plurality of continuous iterations is smaller than a certain threshold value, the network is considered to be converged, and the optimization process is stopped.
4. And finally, saving the initial neural network model after the adjustment into a storage medium by the computer device for subsequent use.
For example, assume the network tuning error in the initial iteration is the value computed in the example above. The computer device updates the parameters using the Adam optimizer and sets the convergence threshold to 0.001. After multiple iterations, if the network tuning error falls to 0.0009 (less than the convergence threshold), the optimization process stops and the tuned neural network model is saved to disk.
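The update-and-converge loop of step S430 (gradient step, then threshold check) can be sketched on a toy loss; the quadratic loss, learning rate and tolerance below are illustrative stand-ins for the real network's tuning error:

```python
def tune(theta, grad_fn, loss_fn, lr=0.1, tol=1e-3, max_iter=1000):
    """Gradient-descent loop with the convergence judgment of step S430.

    Stops when the change in the tuning error between iterations falls
    below `tol`. grad_fn/loss_fn stand in for backpropagation through
    the real network; any differentiable loss will do.
    """
    prev = loss_fn(theta)
    cur = prev
    for _ in range(max_iter):
        theta = theta - lr * grad_fn(theta)  # update: theta <- theta - lr * gradient
        cur = loss_fn(theta)
        if abs(prev - cur) < tol:  # convergence judgment on the error change
            break
        prev = cur
    return theta, cur
```

For instance, minimizing (θ - 2)² from θ = 0 drives θ toward 2 until the per-iteration change in the loss drops below the tolerance.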
Through the implementation of steps S410 to S430, the computer device can accurately calibrate the initial neural network based on the inference confidence, the masking portion and the commonality measurement result, thereby improving the capability of processing the intelligent well lid monitoring data.
As an implementation manner, step S500, constructing a second network tuning template based on the second monitoring data set, and performing a commonality measurement on the second network tuning template and a second triggering tuning template corresponding to the second network tuning template based on the tuned initial neural network, to obtain a second template commonality measurement result, including:
step S510, adding an indication mark into the second monitoring data set to obtain a second network adjusting template, and determining a second triggering adjusting template corresponding to the second network adjusting template;
Step S520, obtaining a third template feature vector representation corresponding to a second network adjustment template based on the initial neural network, and obtaining a fourth template feature vector representation corresponding to a second trigger adjustment template based on the initial neural network;
Step S530, obtaining a second sewer health state feature vector representation associated with the sewer health state associated with the second trigger adjustment template, and obtaining a second template commonality measurement result based on the second sewer health state feature vector representation, the third template feature vector representation and the fourth template feature vector representation.
In step S510, the computer device adds an indicator to the second set of monitoring data to obtain a second network tuning template and determines a second trigger tuning template corresponding to the template.
Specific examples are:
1. Addition of indicator marks: an indicator mark is a special symbol or code that identifies a particular position or event in a data sequence. When constructing the second network tuning template, the computer device may add these marks at appropriate positions in the second monitoring data set to indicate the beginning or end of the data sequence, or the position of a particular data item. For example, the beginning of the data sequence may be marked with "START", the end with "END", and other marks may be used to indicate the position of a masked portion.
2. And generating a second network adjusting template, namely converting the second monitoring data set into the second network adjusting template after adding the indication mark. The template contains the original monitoring data and the indication marks, and provides more abundant information for training and reasoning of the neural network.
3. Determination of the second trigger tuning template: similar to the first trigger tuning template, the second trigger tuning template is associated with the second network tuning template and represents the data set of an intelligent well lid that triggered the same sewer health state. According to the sewer health state corresponding to the second network tuning template, the computer device screens the next well lid that triggered that state from all intelligent well lids that have triggered it, and determines the data set of that well lid as the second trigger tuning template.
For example, assume that the second monitoring data set includes monitoring data of a plurality of smart well lids at different time points, such as water level, harmful gas concentration and noise level. The computer device inserts "START" and "END" marks into these data to indicate the beginning and end of the data sequence, and the marked data set forms the second network tuning template. The computer device then identifies that the sewer health state corresponding to the second network tuning template is "slightly blocked", screens out the next intelligent well lid C that triggered the "slightly blocked" state, and determines the data set of intelligent well lid C as the second trigger tuning template.
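The marker-insertion step above can be sketched as a small helper. This is an illustrative sketch only: the function name `add_indicator_marks`, the `[MASK]` placeholder and the sample field names are assumptions, not identifiers from the patent.

```python
def add_indicator_marks(monitoring_data, mask_positions=()):
    """Wrap a monitoring-data sequence with START/END indicator marks and
    replace any masked positions with a [MASK] placeholder, as described
    for template construction."""
    marked = ["START"]
    for i, item in enumerate(monitoring_data):
        marked.append("[MASK]" if i in mask_positions else item)
    marked.append("END")
    return marked

template = add_indicator_marks(
    ["water_level_A1", "gas_A1", "noise_A1"], mask_positions={1}
)
# template == ["START", "water_level_A1", "[MASK]", "noise_A1", "END"]
```

The same helper covers both the unmasked case (empty `mask_positions`) used for network tuning templates and the masked case used when a masked portion must be indicated.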
In step S520, the computer device obtains the feature vector representations corresponding to the second network calibration template and the second trigger calibration template based on the calibrated initial neural network, respectively.
Specific examples are:
1. Model loading and data input: first, the computer device loads the tuned initial neural network into memory and inputs the second network tuning template and the second trigger tuning template into the network respectively.
2. Feature extraction and representation: the feature extraction layer of the neural network then processes the input data, converting the raw data into feature vectors in a high-dimensional space through a series of mathematical operations (e.g., convolution, pooling, full connection). For the second network tuning template the computer device obtains a third template feature vector representation, and for the second trigger tuning template it obtains a fourth template feature vector representation.
For example, continuing with the example above, assume that the initial neural network after tuning is a Convolutional Neural Network (CNN). When the second network tuning template and the second trigger tuning template are input into the network, the feature extraction layer of the CNN performs convolution operation and pooling operation on the data, so as to extract key features in the data. For the second network tuning template, the CNN may output a feature vector including a plurality of values as a third template feature vector representation, and for the second trigger tuning template, the CNN may output a feature vector as a fourth template feature vector representation.
In step S530, the computer device calculates a second template commonality measurement result based on the second sewer health status feature vector representation associated with the second trigger tuning template, the third template feature vector representation, and the fourth template feature vector representation.
Specific examples are:
1. Acquisition of a health status feature vector representation first, the computer device needs to acquire a sewer health status feature vector representation associated with the second trigger calibration template. This vector, which may be obtained in a previous step by clustering, classification or other machine learning algorithm, represents a characteristic feature of the state of health in a feature space.
2. Commonality metric calculation: the computer device then calculates a commonality metric between the fourth template feature vector representation and the second sewer health status feature vector representation, and a commonality metric between the third template feature vector representation and the fourth template feature vector representation. This can be achieved by various methods such as cosine similarity, Euclidean distance or the Pearson correlation coefficient; here, cosine similarity is assumed as the metric.
3. Result combination: finally, the computer device may combine the two commonality metrics by weighted summation or in other forms to obtain the second template commonality measurement result. This result reflects the similarity and relevance, in feature space, of the second network tuning template, the second trigger tuning template and their associated sewer health state.
For example, continuing with the example above, assume that the second sewer health status feature vector representation is v_s, the fourth template feature vector representation is v_4, and the third template feature vector representation is v_3. The computer device first calculates the cosine similarity between v_s and v_4 to obtain a similarity score S_1, and then calculates the cosine similarity between v_3 and v_4 to obtain a similarity score S_2. Finally, the computer device may weight and sum the two similarity scores (e.g., S = w·S_1 + (1 − w)·S_2), where w is a weight coefficient used to balance the contribution of the two similarity scores to the final result. The calculated S is the second template commonality measurement result.
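A minimal sketch of this cosine-similarity computation and weighted combination follows. It is illustrative, not the patent's code: the names `cosine_similarity`, `template_commonality`, `v_state`, `v_trigger`, `v_template` and the weight `w` are assumptions, where `v_state` plays the role of the health-state vector, `v_trigger` the fourth (trigger) template vector, and `v_template` the third (network tuning) template vector.

```python
import math

def cosine_similarity(u, v):
    """cos(u, v) = (u . v) / (|u| * |v|)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def template_commonality(v_state, v_trigger, v_template, w=0.5):
    """Weighted combination of the two commonality metrics from step S530:
    one between the trigger template and the health-state vector, and one
    between the network tuning template and the trigger template."""
    s1 = cosine_similarity(v_state, v_trigger)
    s2 = cosine_similarity(v_template, v_trigger)
    return w * s1 + (1 - w) * s2

score = template_commonality([1.0, 0.0], [1.0, 0.0], [0.0, 1.0], w=0.5)
# s1 = 1.0, s2 = 0.0, so score = 0.5
```

With w = 0.5 the two metrics contribute equally; the patent leaves the weighting scheme open, so other combinations are equally consistent with the text.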
Through the implementation of steps S510 to S530, the computer device is able to construct a second network tuning template based on the second set of monitoring data and evaluate the similarity and correlation between the second network tuning template, the second trigger tuning template, and their associated sewer health states through a commonality metric calculation.
As an embodiment, the method further comprises:
Step S700, adding an indication mark into a first monitoring data set corresponding to each intelligent well lid in the intelligent well lid networking to obtain intelligent well lid input data corresponding to each intelligent well lid in the intelligent well lid networking;
Step S800, intelligent well lid input data corresponding to a first intelligent well lid in the intelligent well lid networking are loaded to an optimized initial neural network, and intelligent well lid feature vector representation corresponding to the first intelligent well lid is obtained based on the optimized initial neural network;
Step S900, obtaining intelligent well lid feature vector representations corresponding to all intelligent well lids in the intelligent well lid networking, and adding the intelligent well lid feature vector representations corresponding to all intelligent well lids into an intelligent well lid coding feature library.
In a further derivative embodiment of the smart well lid data analysis procedure, steps S700 to S900 relate to the process of preprocessing, feature extraction and feature library construction of individual smart well lid data in the smart well lid networking. The method aims at converting the original monitoring data into more meaningful characteristic representation, and establishing a coding characteristic library containing all intelligent well lid characteristic vectors, so as to provide support for subsequent data analysis and application.
In step S700, the computer device adds an indication mark to the first monitoring data set corresponding to each intelligent well lid in the intelligent well lid networking to obtain intelligent well lid input data.
Specific examples are:
1. First, the computer device needs to collect the first monitoring data set of all intelligent well covers in the intelligent well cover networking. These data sets contain the monitoring data of each intelligent manhole cover at different time points, such as water level, harmful gas concentration, noise level, etc.
2. The computer device then adds an indicator to the first set of monitoring data for each of the smart well covers. These marks are used to indicate the beginning, end, or location of particular data items of the data sequence, helping the neural network to better understand the structure and context information of the data as it is processed. For example, the beginning of the data sequence may be marked with "START", the END of the data sequence may be marked with "END", or other marks may be used to indicate the location of the mask portion or outlier.
3. And after the indication mark is added, the first monitoring data set of each intelligent well lid is converted into intelligent well lid input data. The data contains the original monitoring data and the indication marks, and provides a basis for subsequent feature extraction and encoding feature library construction.
For example, assume that there are two smart well lids a and B in the smart well lid network, and their first monitoring data sets are as follows:
The intelligent well lid A is [water level_A1, harmful gas concentration_A1, noise level_A1, …, water level_An, harmful gas concentration_An, noise level_An];
The intelligent well lid B is [water level_B1, harmful gas concentration_B1, noise level_B1, …, water level_Bm, harmful gas concentration_Bm, noise level_Bm];
After the computer device adds the indication marks into the data sets, intelligent well lid input data are obtained:
Smart well lid A: [START, water level_A1, harmful gas concentration_A1, noise level_A1, …, water level_An, harmful gas concentration_An, noise level_An, END];
Smart well lid B: [START, water level_B1, harmful gas concentration_B1, noise level_B1, …, water level_Bm, harmful gas concentration_Bm, noise level_Bm, END];
In step S800, the computer device loads the smart well lid input data corresponding to the first smart well lid in the smart well lid networking into the optimized initial neural network to obtain the feature vector representation corresponding to the smart well lid.
Specific examples are:
1. Model loading and data input: first, the computer device loads the optimized initial neural network into memory and inputs the smart well lid input data of the first smart well lid (e.g., smart well lid A) into the network.
2. Feature extraction: the neural network processes the input data of the smart well lid, converting the raw data into feature vectors in a high-dimensional space through a series of mathematical operations (e.g., convolution, pooling, full connection). This process is similar to the feature vector acquisition in step S520.
3. Feature vector generation: after processing by the feature extraction layer of the neural network, the computer device obtains the feature vector representation of smart well lid A. This vector captures the key information and patterns in the monitoring data of smart well lid A.
For example, continuing with the example above, assume that the optimized initial neural network is a Convolutional Neural Network (CNN). When the smart well lid input data of smart well lid A is input into the CNN, the feature extraction layer of the CNN performs convolution and pooling operations on the data to extract the key features. Finally, the CNN may output a feature vector containing 128 values as the feature vector representation of smart well lid A, such as [f1_A, f2_A, f3_A, …, f128_A].
In step S900, the computer device obtains feature vector representations corresponding to all the smart well lids in the smart well lid network, and adds the feature vector representations to a unified coding feature library.
Specific examples are:
1. And collecting the characteristic vector representations, namely firstly, the computer device needs to process each intelligent well lid in the intelligent well lid networking in turn according to the mode of the step S800, and obtain the characteristic vector representations corresponding to the intelligent well lids. These feature vector representations constitute the underlying data of the coding feature library.
2. The establishment of the coding feature library, the computer device will then sort all the collected feature vector representations into a unified database or data structure, i.e. the intelligent well lid coding feature library. The library contains unique identification (such as manhole cover ID) of each intelligent manhole cover and corresponding characteristic vector representation, so that subsequent data retrieval and analysis are facilitated.
3. Library maintenance and update: over time, as new monitoring data are generated, the computer device may need to update the coding feature library periodically. This may be accomplished by adding newly acquired feature vector representations to the library, or by cleaning and replacing old data in the library as needed.
For example, assuming there are N intelligent well lids (N ≥ 2) in the intelligent well lid network, the computer device has obtained a feature vector representation of each intelligent well lid in the manner of step S800. The feature vector representations may be organized into a two-dimensional array or similar data structure, such as [[well lid ID_1, [f1_1, f2_1, f3_1, …, f128_1]], [well lid ID_2, [f1_2, f2_2, f3_2, …, f128_2]], …, [well lid ID_N, [f1_N, f2_N, f3_N, …, f128_N]]].
The array forms the intelligent well lid coding feature library. In the subsequent data analysis, the computer device can quickly retrieve the corresponding characteristic vector representation according to the well lid ID, and perform tasks such as pattern recognition, anomaly detection or predictive analysis by utilizing the characteristic vectors.
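The ID-keyed retrieval described above can be sketched with a plain dictionary. This is an assumption about a reasonable data structure, not the patent's implementation; the names `build_feature_library` and the sample IDs are illustrative.

```python
def build_feature_library(feature_vectors):
    """Index smart well lid feature vectors by well lid ID (step S900),
    so later analysis can retrieve a vector directly by its ID."""
    return {lid: vec for lid, vec in feature_vectors}

library = build_feature_library([
    ("lid_1", [0.12, 0.80, 0.05]),
    ("lid_2", [0.33, 0.10, 0.91]),
])
vec = library["lid_2"]  # O(1) retrieval by well lid ID
```

A dictionary gives constant-time lookup by ID, which matches the "quickly retrieve" requirement; a relational table keyed on the well lid ID would serve the same purpose at larger scale.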
Through the implementation of steps S700 to S900, the computer device can convert the raw monitoring data in the smart well lid networking into a more meaningful feature vector representation, and build a coded feature library containing all the smart well lid feature vectors. The process not only helps to promote the efficiency of data analysis and application, but also provides powerful support for the fine management of smart cities.
As an embodiment, the method further comprises:
step S1000, constructing a third network adjustment template of a state triggering early warning network based on intelligent well lid feature vector representation in an intelligent well lid coding feature library and past intelligent well lid sequences corresponding to all sewer health states in a sewer health state set;
The method comprises the steps of S1100, adjusting a state trigger early warning network based on a third network adjusting template to obtain an adjusted state trigger early warning network, wherein the adjusting process of the state trigger early warning network comprises a first process and a second process, the first process is used for adjusting sequence prediction learning and stopping adjusting the intelligent well lid feature vector representation, the second process is used for adjusting the intelligent well lid feature vector representation and stopping adjusting the sequence prediction learning, and the adjusted state trigger early warning network is used for performing state trigger early warning.
Steps S1000 and S1100 aim to construct a state triggering early warning network capable of accurately predicting and early warning the health state of the sewer by using the intelligent well lid coding feature library and the past intelligent well lid sequence information.
In step S1000, the computer device constructs a third network tuning template for the status-triggered alert network based on the smart well lid feature vector representation in the smart well lid code feature library and the past smart well lid sequences corresponding to each of the sewer health states in the set of sewer health states.
Specific examples are:
1. First, the computer device needs to extract the feature vector representation of all the intelligent well lids from the intelligent well lid coding feature library. The feature vectors capture key information and modes in each intelligent well lid monitoring data, and are the basis for constructing a third network adjustment template.
2. Sequence matching: next, the computer device needs to match the smart well lid feature vector representations to each health state in the sewer health state set. For each health state, the computer device screens out the intelligent well lids that triggered that state from the past sequence of intelligent well lids and obtains the feature vector representations of these well lids before and after the trigger.
3. Template construction: based on the matched data, the computer device can construct the third network tuning template. The template comprises a plurality of data segments, each composed of a series of feature vectors representing the feature changes of an intelligent well lid before and after triggering a specific health state. In addition, the template may contain other relevant information such as the trigger time, the health state type and the well lid position.
For example, assume that the smart well lid code feature library contains 100 smart well lid feature vector representations, and three states of "normal", "slightly blocked" and "severely blocked" are defined in the sewer health state set. The computer device firstly extracts the feature vectors of all the well covers from the feature library. Then, for the "slightly blocked" condition, the computer device will screen out all well lids that triggered the condition (e.g., well lid a, well lid B, and well lid C) and obtain their feature vector representations before and after triggering the "slightly blocked" condition. Finally, the computer device constructs a third network tuning template containing the feature vector representations for subsequent state-triggered early warning network tuning.
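The screening-and-grouping step in this example can be sketched as grouping feature vectors by the health state they triggered. The record layout (`(well_lid_id, health_state)` pairs plus an ID-to-vector mapping) and the name `build_state_segments` are assumptions for illustration, not structures defined by the patent.

```python
from collections import defaultdict

def build_state_segments(trigger_events, library):
    """Group feature vectors by the sewer health state they triggered,
    producing per-state data segments of the kind used for the third
    network tuning template in step S1000."""
    segments = defaultdict(list)
    for lid, state in trigger_events:
        if lid in library:  # skip IDs missing from the feature library
            segments[state].append((lid, library[lid]))
    return dict(segments)

segments = build_state_segments(
    [("lid_A", "slightly blocked"), ("lid_B", "slightly blocked"),
     ("lid_C", "normal")],
    {"lid_A": [0.1], "lid_B": [0.2], "lid_C": [0.3]},
)
```

Each value in `segments` is one data segment: the well lids (with their vectors) that triggered that state, ready to be assembled into the tuning template together with trigger times and positions.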
In step S1100, the computer device calibrates the status triggering warning network based on the third network calibration template to obtain a network model capable of accurately predicting and warning the health status of the sewer.
Specific examples are:
1. Network initialization: first, the computer device needs to initialize the state-trigger early warning network. This network may be a deep learning model suited to sequence data, such as a Recurrent Neural Network (RNN), a Long Short-Term Memory network (LSTM) or a Gated Recurrent Unit (GRU), used to process sequence data and predict future health states.
2. The adjusting process of the state triggering early warning network comprises a first process and a second process.
First process: in the first process, the computer device focuses mainly on tuning the sequence prediction learning. This means the network learns how to predict future health state sequences from past feature vector sequences. During this process, the computer device may freeze the tuning of the smart well lid feature vector representations (i.e., not update the weights of these features) while concentrating on optimizing the network parameters associated with sequence prediction.
Second process: in the second process, the computer device focuses on tuning the smart well lid feature vector representations. This means the network learns how to better extract and represent the monitoring data features of the smart well lids so as to improve the accuracy of state prediction. During this process, the computer device may freeze the parameters learned for sequence prediction (i.e., not update the weights associated with sequence prediction) while concentrating on optimizing the network parameters related to feature extraction and representation.
By alternating these two processes, the computer device can gradually optimize the overall performance of the state-triggered early warning network.
3. Convergence judgment and model saving: after each tuning iteration, the computer device needs to check whether the performance of the network on the validation set has improved. If the performance improvement meets the preset convergence requirement (e.g., the accuracy improvement is smaller than a certain threshold), the tuning process stops and the current network model is saved as the final state-trigger early warning network.
For example, continuing with the example above, assume that the computer device has constructed an LSTM based state trigger alert network. In the first process, the computer device fixes the eigenvector input weights of the LSTM layer and optimizes the state update and output weights of the LSTM cells to improve the accuracy of the sequence prediction. In the second process, the computer device fixes the state update and output weights of the LSTM cells and optimizes the weights of the feature extraction layer (such as the weights of the convolution layer or the full connection layer) to improve the quality of the representation of the feature vectors of the smart well lid. By alternately performing the two processes, the computer device can gradually improve the overall performance of the state triggering early warning network until the convergence requirement is met.
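The alternation between the two processes can be illustrated on a toy problem, with no deep-learning framework involved. This sketch is an assumption-laden analogy, not the patent's LSTM training: parameter `a` stands in for the sequence-prediction weights and `b` for the feature-representation weights, and each round updates only one of them while the other stays frozen, fitting y = a·x + b by per-sample gradient steps.

```python
def alternating_tuning(xs, ys, rounds=500, lr=0.02):
    """Toy illustration of the two-process alternation of step S1100:
    even rounds tune parameter a (b frozen), odd rounds tune parameter b
    (a frozen), minimizing the squared error of y = a*x + b."""
    a, b = 0.0, 0.0
    for r in range(rounds):
        tune_a = (r % 2 == 0)  # which parameter group is active this round
        for x, y in zip(xs, ys):
            err = (a * x + b) - y
            if tune_a:
                a -= lr * err * x  # first process: b stays frozen
            else:
                b -= lr * err      # second process: a stays frozen
    return a, b

a, b = alternating_tuning([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
# a approaches 2 and b approaches 1 (the data follow y = 2x + 1)
```

The same freeze-one-group/update-the-other pattern is what a framework implementation would express by toggling which parameter groups receive gradient updates between the two processes.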
Through the implementation of steps S1000 and S1100, the computer device constructs a state trigger early warning network for accurately predicting and early warning the health state of the sewer. This process involves not only complex data processing and machine learning techniques, but also embodies fine tuning and optimization of the sequence data and the feature representation.
In summary, the invention combines the first monitoring data sets corresponding to the intelligent well lids covered in the past intelligent well lid sequence of each sewer health state by acquiring the first monitoring data sets corresponding to each intelligent well lid in the intelligent well lid networking, acquires the second monitoring data sets corresponding to the past intelligent well lid sequence, and generates a first network adjusting template for adjusting the initial neural network and a second network adjusting template for optimizing the adjusted initial neural network based on the second monitoring data sets. And according to the first network adjustment template, adjusting the initial neural network based on a parallel mode of data reasoning and commonality measurement to obtain the adjusted initial neural network. And optimizing the adjusted initial neural network based on the mode of the commonality measurement according to the second network adjustment template to obtain an optimized initial neural network. The initial neural network after optimization can generate an initialized intelligent well lid feature vector representation for state triggering early warning, and the feature of the past intelligent well lid sequence is mined by combining the capability of the initial neural network, so that the characterization quality of the intelligent well lid feature vector representation is improved, and the triggering early warning accuracy is further improved.
The embodiment of the invention provides a computer device, which comprises a memory and a processor, wherein the memory stores a computer program capable of running on the processor, and the processor realizes part or all of the steps in the method when executing the program.
Embodiments of the present invention provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs some or all of the steps of the above-described method. The computer readable storage medium may be transitory or non-transitory.
Embodiments of the present invention provide a computer program comprising computer readable code which, when run in a computer device, causes a processor in the computer device to perform some or all of the steps for carrying out the above method.
Embodiments of the present invention provide a computer program product comprising a non-transitory computer-readable storage medium storing a computer program which, when read and executed by a computer, performs some or all of the steps of the above-described method. The computer program product may be realized in particular by means of hardware, software or a combination thereof. In some embodiments, the computer program product is embodied as a computer storage medium, and in other embodiments, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK), or the like.
Fig. 2 is a schematic diagram of a hardware entity of a computer device according to an embodiment of the present invention, and as shown in fig. 2, the hardware entity of the computer device 1000 includes a processor 1001 and a memory 1002, where the memory 1002 stores a computer program that can be run on the processor 1001, and the processor 1001 implements steps in the method of any of the embodiments described above when executing the program.
The foregoing is merely an embodiment of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present invention, and the changes and substitutions are intended to be covered by the scope of the present invention.

Claims (11)

1.一种基于物联网的智慧井盖数据分析方法,其特征在于,包括:1. A smart manhole cover data analysis method based on the Internet of Things, characterized by comprising: 获取智慧井盖组网中的各个智慧井盖对应的第一监测数据集合,基于所述第一监测数据集合,获取下水道健康状态集合中的各个下水道健康状态的过往智慧井盖序列关联的第二监测数据集合;Obtain a first monitoring data set corresponding to each smart manhole cover in the smart manhole cover network, and based on the first monitoring data set, obtain a second monitoring data set associated with a past smart manhole cover sequence of each sewer health status in the sewer health status set; 基于所述第二监测数据集合构建第一网络调校模板,根据初始神经网络对所述第一网络调校模板进行数据推理,获得所述第一网络调校模板中的屏蔽部分对应的推理置信度;所述第一网络调校模板包含触发调校模板和未触发调校模板;constructing a first network calibration template based on the second monitoring data set, performing data inference on the first network calibration template according to the initial neural network, and obtaining the inference confidence corresponding to the shielded part in the first network calibration template; the first network calibration template includes a triggered calibration template and a non-triggered calibration template; 基于所述初始神经网络,对所述第一网络调校模板和所述第一网络调校模板对应的第一触发调校模板进行共性度量,获得第一模板共性度量结果;Based on the initial neural network, performing commonality measurement on the first network adjustment template and a first trigger adjustment template corresponding to the first network adjustment template to obtain a first template commonality measurement result; 基于所述推理置信度、所述第一网络调校模板中的屏蔽部分,以及所述第一模板共性度量结果,对所述初始神经网络进行调校,获得调校后的初始神经网络;Based on the reasoning confidence, the shielded part in the first network calibration template, and the first template commonality measurement result, the initial neural network is calibrated to obtain a calibrated initial neural network; 基于所述第二监测数据集合构建第二网络调校模板,基于所述调校后的初始神经网络,对所述第二网络调校模板和所述第二网络调校模板对应的第二触发调校模板进行共性度量,获得第二模板共性度量结果;所述第二网络调校模板不具有触发调校模板;constructing a second network tuning template based on the second monitoring data set, and performing commonality measurement on the second network tuning template and a second 
trigger tuning template corresponding to the second network tuning template based on the adjusted initial neural network to obtain a second template commonality measurement result; the second network tuning template does not have a trigger tuning template; 基于所述第二模板共性度量结果对所述调校后的初始神经网络进行优化,获得优化后的初始神经网络;所述优化后的初始神经网络用于生成智慧井盖的初始编码特征。Based on the commonality measurement result of the second template, the adjusted initial neural network is optimized to obtain an optimized initial neural network; the optimized initial neural network is used to generate initial coding features of the smart manhole cover. 2.根据权利要求1所述的方法,其特征在于,所述获取智慧井盖组网中的各个智慧井盖对应的第一监测数据集合,基于所述第一监测数据集合,获取下水道健康状态集合中的各个下水道健康状态的过往智慧井盖序列关联的第二监测数据集合,包括:2. The method according to claim 1 is characterized in that the step of obtaining a first monitoring data set corresponding to each smart manhole cover in the smart manhole cover network, and obtaining a second monitoring data set associated with a past smart manhole cover sequence of each sewer health status in the sewer health status set based on the first monitoring data set, comprises: 获取智慧井盖组网中的各个智慧井盖对应的水流监测数据、有害气体监测数据以及噪音监测数据,将相同智慧井盖对应的水流监测数据、有害气体监测数据以及噪音监测数据进行融合,获得各个智慧井盖对应的第一监测数据集合;Obtain water flow monitoring data, harmful gas monitoring data, and noise monitoring data corresponding to each smart manhole cover in the smart manhole cover network, merge the water flow monitoring data, harmful gas monitoring data, and noise monitoring data corresponding to the same smart manhole cover, and obtain a first monitoring data set corresponding to each smart manhole cover; 获取下水道健康状态集合中的各个下水道健康状态对应的过往智慧井盖序列,基于所述过往智慧井盖序列中涵盖的智慧井盖对应的预警触发时刻,对所述过往智慧井盖序列中涵盖的智慧井盖按照递减次序排列,获得整理后的过往智慧井盖序列;Obtain a past smart manhole cover sequence corresponding to each sewer health state in the sewer health state set, and based on the warning triggering moments corresponding to the smart manhole covers included in the past smart manhole cover sequence, arrange the smart manhole covers included 
in the past smart manhole cover sequence in descending order to obtain a sorted past smart manhole cover sequence; 对所述整理后的过往智慧井盖序列中涵盖的智慧井盖对应的第一监测数据集合进行组合,获得各个下水道健康状态的过往智慧井盖序列关联的第二监测数据集合。The first monitoring data sets corresponding to the smart manhole covers included in the sorted past smart manhole cover sequence are combined to obtain a second monitoring data set associated with the past smart manhole cover sequence of each sewer health status. 3.根据权利要求1所述的方法,其特征在于,所述基于所述第二监测数据集合构建第一网络调校模板,包括:3. The method according to claim 1, wherein constructing a first network calibration template based on the second monitoring data set comprises: 基于数据筛选比值确定所述第二监测数据集合中的屏蔽部分,将所述屏蔽部分的数据项作为候选监测数据集合;Determine a shielded portion in the second monitoring data set based on the data screening ratio, and use the data items of the shielded portion as a candidate monitoring data set; 对所述第二监测数据集合中的候选监测数据集合进行数据屏蔽操作,获得基础调校模板,为所述基础调校模板添加指示标记,获得第一网络调校模板。Perform a data masking operation on the candidate monitoring data set in the second monitoring data set to obtain a basic calibration template, add an indication mark to the basic calibration template, and obtain a first network calibration template. 4.根据权利要求3所述的方法,其特征在于,所述对所述第二监测数据集合中的候选监测数据集合进行数据屏蔽操作,获得基础调校模板,包括:4. 
The method according to claim 3, characterized in that the step of performing a data masking operation on the candidate monitoring data set in the second monitoring data set to obtain a basic calibration template comprises: performing a data masking operation on the candidate monitoring data set in the second monitoring data set based on a first masking support and a second masking support, to obtain a basic calibration template; wherein the first masking support represents the probability of changing the candidate monitoring data set in the second monitoring data set into a placeholder, and the second masking support represents the probability of changing the candidate monitoring data set in the second monitoring data set into random data. 5. The method according to claim 1, characterized in that the step of performing data inference on the first network calibration template according to the initial neural network to obtain the inference confidence corresponding to the masked part in the first network calibration template comprises: loading the first network calibration template into the initial neural network, and performing feature vector representation on the first network calibration template based on the initial neural network to obtain a first template feature vector representation corresponding to the first network calibration template; performing data inference on the masked part in the first network calibration template based on the first template feature vector representation, to obtain the inference confidence corresponding to the masked part in the first network calibration template. 6. The method according to claim 5, characterized in that the step of performing commonality measurement on the first network calibration template and the first trigger calibration template corresponding to the first network calibration template based on the initial neural network to obtain a first template commonality measurement result comprises: taking, as the first trigger calibration template, the first monitoring data set of the smart manhole cover that next triggered the target state after the first network calibration template, the first network calibration template and the first trigger calibration template belonging to smart manhole covers that have previously triggered the target state under the same sewer health state; loading the first trigger calibration template into the initial neural network, and performing feature vector representation on the first trigger calibration template based on a feature extraction component in the initial neural network to obtain a second template feature vector representation corresponding to the first trigger calibration template; obtaining a first sewer health state feature vector representation associated with the sewer health state of the first trigger calibration template, and obtaining the first template commonality measurement result based on the first sewer health state feature vector representation, the second template feature vector representation, and the first template feature vector representation corresponding to the first network calibration template.
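The masking operation in claim 4 resembles masked-data modeling: each candidate element is replaced by a placeholder with one probability (the first masking support) or by random data with another (the second masking support), and otherwise kept. A minimal illustrative sketch, not the patented implementation — the function name `mask_candidates`, the `[MASK]` placeholder, and the default probabilities are assumptions:

```python
import random

PLACEHOLDER = "[MASK]"  # hypothetical placeholder token

def mask_candidates(candidates, vocabulary, p_placeholder=0.8, p_random=0.1, seed=0):
    """Replace each candidate with a placeholder (first masking support)
    or with random data drawn from `vocabulary` (second masking support);
    otherwise keep the original item."""
    rng = random.Random(seed)
    template = []
    for item in candidates:
        r = rng.random()
        if r < p_placeholder:                # first masking support
            template.append(PLACEHOLDER)
        elif r < p_placeholder + p_random:   # second masking support
            template.append(rng.choice(vocabulary))
        else:
            template.append(item)            # left unchanged
    return template

# Example: mask a small set of hypothetical manhole-cover readings
data = ["tilt=2.1", "vib=0.3", "gas=12", "open=0"]
vocab = ["tilt=9.9", "vib=7.7", "gas=99", "open=1"]
print(mask_candidates(data, vocab))
```

The masked result serves as the basic calibration template on which the network later predicts the hidden values.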
7. The method according to claim 6, characterized in that the number of first network calibration templates is x, x ≥ 1; and the step of obtaining the first template commonality measurement result based on the first sewer health state feature vector representation, the second template feature vector representation, and the first template feature vector representation corresponding to the first network calibration template comprises: obtaining a first commonality measurement result between the first sewer health state feature vector representation and the second template feature vector representation, and performing an exponential transformation on the first commonality measurement result to obtain a first temporary commonality measurement result; obtaining a second commonality measurement result between the first sewer health state feature vector representation and the first template feature vector representation corresponding to each first network calibration template, and performing an exponential transformation on each second commonality measurement result to obtain a second temporary commonality measurement result for each first network calibration template; summing the second temporary commonality measurement results of all first network calibration templates to obtain a commonality measurement sum, and determining the first template commonality measurement result based on the ratio between the first temporary commonality measurement result and the commonality measurement sum.
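The exponential transform, sum, and ratio in claim 7 amount to a softmax-style normalization of similarity scores. A minimal sketch, assuming a dot product as the underlying commonality measure (the claim does not fix the measure) and assuming the function names used here:

```python
import math

def dot(u, v):
    """Dot-product similarity, assumed as the commonality measure."""
    return sum(a * b for a, b in zip(u, v))

def template_commonality(state_vec, trigger_vec, network_template_vecs):
    """Claim-7-style commonality: exp(similarity between the sewer health
    state vector and the trigger calibration template vector), normalized
    by the summed exp(similarities) over the x network calibration
    template vectors."""
    first_temp = math.exp(dot(state_vec, trigger_vec))               # first temporary result
    second_temps = [math.exp(dot(state_vec, t)) for t in network_template_vecs]
    return first_temp / sum(second_temps)                            # ratio to the sum

state = [1.0, 0.0]
trigger = [1.0, 0.0]
templates = [[1.0, 0.0], [0.0, 1.0]]
print(template_commonality(state, trigger, templates))
```

With the toy vectors above the result is e / (e + 1), i.e. the trigger template that aligns with the health-state vector receives most of the normalized mass.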
8. The method according to claim 1, characterized in that the number of masked parts in the first network calibration template is y, y ≥ 1; and the step of calibrating the initial neural network based on the inference confidence, the masked parts in the first network calibration template, and the first template commonality measurement result to obtain the calibrated initial neural network comprises: performing a logarithmic transformation on the inference confidences corresponding to the y masked parts to obtain a likelihood transformation result for each masked part, and summing the likelihood transformation results of all masked parts to obtain a sum serving as the masked data prediction error; determining the network calibration error of the initial neural network based on the masked data prediction error and the first template commonality measurement result; repeatedly correcting the parameters of the initial neural network based on the network calibration error, and stopping the optimization when the network calibration error meets a preset convergence requirement, to obtain the calibrated initial neural network; wherein the step of constructing a second network calibration template based on the second monitoring data set, and performing commonality measurement on the second network calibration template and the second trigger calibration template corresponding to the second network calibration template based on the calibrated initial neural network to obtain a second template commonality measurement result, comprises: adding an indication mark to the second monitoring data set to obtain the second network calibration template, and determining the second trigger calibration template corresponding to the second network calibration template; obtaining a third template feature vector representation corresponding to the second network calibration template based on the calibrated initial neural network, and obtaining a fourth template feature vector representation corresponding to the second trigger calibration template based on the calibrated initial neural network; obtaining a second sewer health state feature vector representation associated with the sewer health state of the second trigger calibration template, and obtaining the second template commonality measurement result based on the second sewer health state feature vector representation, the third template feature vector representation, and the fourth template feature vector representation; the method further comprising: adding an indication mark to the first monitoring data set corresponding to each smart manhole cover in the smart manhole cover network, to obtain smart manhole cover input data corresponding to each smart manhole cover in the smart manhole cover network; loading the smart manhole cover input data corresponding to a first smart manhole cover in the smart manhole cover network into the optimized initial neural network, and obtaining a smart manhole cover feature vector representation corresponding to the first smart manhole cover based on the optimized initial neural network; obtaining the smart manhole cover feature vector representation corresponding to each smart manhole cover in the smart manhole cover network, and adding the smart manhole cover feature vector representation corresponding to each smart manhole cover to a smart manhole cover coding feature library. 9. The method according to claim 8, characterized in that the method further comprises: constructing a third network calibration template for a state-trigger warning network based on the smart manhole cover feature vector representations in the smart manhole cover coding feature library and the past smart manhole cover sequence corresponding to each sewer health state in the sewer health state set; calibrating the state-trigger warning network based on the third network calibration template to obtain a calibrated state-trigger warning network; wherein the calibration process of the state-trigger warning network comprises a first stage and a second stage, the first stage being used to calibrate the sequence prediction learning while suspending calibration of the smart manhole cover feature vector representations, and the second stage being used to calibrate the smart manhole cover feature vector representations while suspending calibration of the sequence prediction learning; the calibrated state-trigger warning network is used for state-trigger warning. 10. A computer device comprising a memory and a processor, the memory storing a computer program executable on the processor, characterized in that the processor implements the steps of the method according to any one of claims 1 to 9 when executing the program. 11. A computer-readable storage medium having a computer program stored thereon, characterized in that when the computer program is executed by a processor, the steps of the method according to any one of claims 1 to 9 are implemented.
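Claim 8's masked data prediction error — a log transform of the per-part inference confidences followed by a sum — matches the shape of a standard negative log-likelihood, and the claim combines it with the template commonality result into a single network calibration error. A minimal sketch under stated assumptions: the negation (so the error shrinks as confidence grows), the weighted-sum combination, and the weight `alpha` are all assumptions, since the claim only specifies a log transform, a sum, and a combination of the two terms:

```python
import math

def masked_prediction_error(confidences):
    """Sum of log-transformed inference confidences over the y masked
    parts, negated so that higher confidence yields lower error (the
    sign convention is an assumption)."""
    return -sum(math.log(c) for c in confidences)

def network_calibration_error(confidences, commonality, alpha=1.0):
    """Combine the masked-data prediction error with the first template
    commonality measurement result; the additive combination and the
    `alpha` weight are assumptions."""
    return masked_prediction_error(confidences) - alpha * math.log(commonality)

# Hypothetical confidences for y = 3 masked parts and one commonality score
err = network_calibration_error([0.9, 0.8, 0.95], commonality=0.7)
print(round(err, 4))
```

In a training loop, this scalar would drive the repeated parameter corrections of claim 8, with optimization stopping once the error satisfies the preset convergence requirement.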
CN202411650649.8A 2024-11-19 2024-11-19 Intelligent well lid data analysis method and device based on Internet of things and storage medium Active CN119155335B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411650649.8A CN119155335B (en) 2024-11-19 2024-11-19 Intelligent well lid data analysis method and device based on Internet of things and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202411650649.8A CN119155335B (en) 2024-11-19 2024-11-19 Intelligent well lid data analysis method and device based on Internet of things and storage medium

Publications (2)

Publication Number Publication Date
CN119155335A true CN119155335A (en) 2024-12-17
CN119155335B CN119155335B (en) 2025-02-14

Family

ID=93815819

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411650649.8A Active CN119155335B (en) 2024-11-19 2024-11-19 Intelligent well lid data analysis method and device based on Internet of things and storage medium

Country Status (1)

Country Link
CN (1) CN119155335B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021007812A1 (en) * 2019-07-17 2021-01-21 深圳大学 Deep neural network hyperparameter optimization method, electronic device and storage medium
CN116852348A (en) * 2023-06-01 2023-10-10 中国航空油料集团有限公司 Well lid positioning method, device and system
US20230358879A1 (en) * 2020-11-10 2023-11-09 Robert Bosch Gmbh Method for monitoring surroundings of a first sensor system
CN117421643A (en) * 2023-12-18 2024-01-19 贵州省环境工程评估中心 Ecological environment remote sensing data analysis method and system based on artificial intelligence
CN117990162A (en) * 2024-04-03 2024-05-07 河北工程大学 A manhole monitoring device and method based on convolutional neural network
CN118013382A (en) * 2024-01-31 2024-05-10 成都青羊殊德中西医门诊有限公司 A smart medical management system based on neural network


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
任小华; 种兰祥; 杨建锋: "Academic early-warning model based on an FT_BP neural network", Application Research of Computers, no. 1, 30 June 2020 (2020-06-30), pages 93-95 *
查旭东 et al.: "Review of distress treatment and intelligent detection and supervision of road manhole covers and surrounding pavement", China Journal of Highway and Transport, 10 September 2024 (2024-09-10), pages 1-31 *
汪海波 et al.: "Research on urban manhole cover design based on multimodal experience", Journal of Hefei University, vol. 40, no. 5, 31 October 2023 (2023-10-31), pages 111-118 *

Also Published As

Publication number Publication date
CN119155335B (en) 2025-02-14

Similar Documents

Publication Publication Date Title
CN118246744B (en) A risk assessment method and system for extra-long tunnel construction site
CN106250442A (en) The feature selection approach of a kind of network security data and system
US20210397973A1 (en) Storage medium, optimum solution acquisition method, and optimum solution acquisition apparatus
CN118052558B (en) Wind control model decision method and system based on artificial intelligence
Miró-Nicolau et al. A comprehensive study on fidelity metrics for XAI
CN117093919B (en) Geotechnical engineering geological disaster prediction method and system based on deep learning
CN118151020B (en) Method and system for detecting safety performance of battery
CN117933497B (en) TSA-ARIMA-CNN-based enterprise carbon emission prediction method
CN118606855A (en) Soil contamination detection method and device based on artificial intelligence
CN118312157A (en) A program development assistance method based on generative AI
CN118210082A (en) Short-time strong convection precipitation prediction method based on ResT and Unet fusion
CN119669883B (en) Semiconductor automated testing method and testing equipment based on artificial intelligence
CN119623531B (en) Supervised time series water level data generation method, system and storage medium
CN119419727A (en) Power load forecasting and optimization method based on artificial intelligence
CN117521063A (en) Malware detection method and device based on residual neural network and combined with transfer learning
CN119250476B (en) A method for water resource allocation in reservoir irrigation areas based on big data analysis
CN120277571A (en) Construction method of multidimensional time sequence anomaly detection system
CN120107626A (en) A three-level interactive fusion graph similarity learning method
CN119155335B (en) Intelligent well lid data analysis method and device based on Internet of things and storage medium
CN118965223A (en) Abnormal diagnosis method and system for oil-free dry vacuum unit
CN118656640A (en) Meteorological report generation method and system based on deep learning
CN117668528A (en) Natural gas voltage regulator fault detection method and system based on Internet of things
CN119443424B (en) A flood prevention and early warning method for reservoir irrigation areas based on big data analysis
CN119416303B (en) Method and system for predicting landslide surge height based on sparrow optimization random forest model
CN119669475B (en) Data classification grading method and system based on large model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant