
The fourth industrial revolution glossarium: over 1500 of the hottest terms you will use to create the future


Constructed language (also conlang) is a language whose phonology, grammar, and vocabulary are consciously devised, instead of having developed naturally. Constructed languages may also be referred to as artificial, planned, or invented languages.


Convolutional neural network – in deep learning, a convolutional neural network (CNN, or ConvNet) is a class of deep neural networks most commonly applied to analyzing visual imagery. CNNs use a variation of multilayer perceptrons designed to require minimal preprocessing. They are also known as shift-invariant or space-invariant artificial neural networks (SIANN), based on their shared-weights architecture and translation-invariance characteristics: convolution kernels or filters with shared weights slide over input features and produce translation-equivariant responses known as feature maps.
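For illustration, the sliding shared-weight kernel that produces a feature map can be sketched in a few lines of NumPy; the image values and the kernel below are invented, and the loop-based code is a teaching sketch rather than an efficient implementation.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution: slide one shared-weight kernel over the image."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    feature_map = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            # The same kernel weights are reused at every position (weight sharing),
            # which gives the translation-equivariant response described above.
            feature_map[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return feature_map

image = np.arange(25, dtype=float).reshape(5, 5)   # toy 5x5 "image"
kernel = np.array([[1.0, -1.0],
                   [1.0, -1.0]])                   # toy 2x2 filter
print(conv2d(image, kernel))                       # 4x4 feature map
```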

«D»

Dashboard is a panel that shows operational and instrument vitals/readings and that helps process experts to monitor the most important production KPIs in one central point of access. It enables manufacturers to track and optimize the production quality and is a valuable analytics tool to manage all related manufacturing costs efficiently313.


Data Access is the authorized, on-demand ability to access, modify or edit selected data, regardless of location. Data Access is one of the main aspects of establishing successful data governance systems314.


Data Altruism – term used in the Data Governance Act. Data that is made available without reward for purely non-commercial usage that benefits communities or society at large, such as the use of mobility data to improve local transport315.


Data analytics is the science of analyzing raw data to make conclusions about that information. Many of the techniques and processes of data analytics have been automated into mechanical processes and algorithms that work over raw data for human consumption316.


Data Architecture is a discipline, process, and program focusing on integrating sets of information. One of the four Enterprise Architectures (with Application Architecture, Business Architecture, and System Architecture)317.


Data at Rest is stored data that is not processed or transferred318.


Data Center is a facility composed of networked computers, storage systems and computing infrastructure that organizations use to assemble, process, store and disseminate large amounts of data. A business typically relies heavily on the applications, services and data contained within a data center, making it a critical asset for everyday operations. Also, Data Center is a facility that contains connected equipment for computing resources319,320.


Data Controller (or Controller) – the natural or legal person, or any other body, which alone or jointly with others determines the purposes and means of the processing of personal data. In a clinical trial, the organisation(s) responsible for the trial is usually considered to be the controller321.


Data controller is a person, company, or other body that determines the purpose and means of personal data processing (this can be determined alone, or jointly with another person/company/body)322.


Data Curation is the organization and integration of data collected from various sources and it involves capturing, appraisal, description, preservation, access, use and reuse, and sharing of research data323.


Data Destruction – operation that results in the permanent, unrecoverable removal of information about an object from memory or storage (e.g., by multiple overwrites with a series of random bits)324.


Data Dictionary – a database about data and database structures. A catalog of all data elements, containing their names, structures, and information about their usage, for the benefit of programmers and others interested in the data elements and their usage325.


Data Donation is research in which people voluntarily contribute their own personal data that was generated for a different purpose to a collective dataset326.


Data Donator – a person donating personal data (may have the option to provide his/her email, signing the data with a private key, and restricting the allowed usage of the provided data)327.


Data Economy is a global digital ecosystem that enables the free movement of data within the EU. Furthermore, data enables optimization and decision-making processes as well as innovations in a wide range of areas. Also, Data Economy refers to the utilization of digital data in commercial transactions328,329.


Data Element – the smallest piece of information considered meaningful and usable. A single logical data fact, the basic building block of a Logical Data Model330.


Data Enrichment is the process of augmenting collected raw data or processed data with existing data or domain knowledge to enhance the analytic process331.
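For illustration, a minimal sketch of enrichment using only the Python standard library: raw records are augmented with existing reference data and a derived field; the machine catalog and readings are hypothetical.

```python
# Hypothetical reference data (existing domain knowledge).
machine_catalog = {
    "M-01": {"line": "Assembly A", "max_temp_c": 80},
    "M-02": {"line": "Assembly B", "max_temp_c": 75},
}

# Raw collected data.
raw_readings = [
    {"machine_id": "M-01", "temp_c": 83},
    {"machine_id": "M-02", "temp_c": 70},
]

def enrich(reading):
    # Merge the raw record with catalog attributes and derive a new field.
    meta = machine_catalog.get(reading["machine_id"], {})
    enriched = {**reading, **meta}
    enriched["over_limit"] = reading["temp_c"] > meta.get("max_temp_c", float("inf"))
    return enriched

print([enrich(r) for r in raw_readings])
```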


Data entry – the process of converting verbal or written responses to electronic form332.


Data for social science generally consists of numeric files originating from social research methodologies or administrative records, from which statistics are produced333.


Data Governance is a system of decision rights and accountabilities for information-related processes, executed according to agreed-upon models which describe who can take what actions with what information, and when, under what circumstances, using what methods334.


Data Governance Methodology is a logical structure providing step-by-step instructions for performing Data Governance processes335.


Data Governance Office is a centralized organizational entity responsible for facilitating and coordinating Data Governance and/or Stewardship efforts for an organization. It supports a decision-making group, such as a Data Stewardship Council336.


Data in Motion is information that’s transferred from one location to another337.


Data in Use is information that’s being processed338.


Data Integrity proves that data hasn’t been tampered with, altered, or destroyed in an unauthorized way339.
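For illustration, one common way to check this property is a keyed message authentication code: any change to the data changes the tag. A minimal sketch with Python's standard hashlib and hmac modules; the key and messages are invented.

```python
import hashlib
import hmac

key = b"shared-secret-key"                # illustrative key only
data = b"order=42;amount=100.00"

tag = hmac.new(key, data, hashlib.sha256).hexdigest()

tampered = b"order=42;amount=999.00"
tampered_tag = hmac.new(key, tampered, hashlib.sha256).hexdigest()

# A mismatch shows the data was altered after the tag was produced.
print(hmac.compare_digest(tag, tampered_tag))   # False
```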


Data is a public good – this concept allows open use of non-personal data340.


Data lakes are centralized repositories of structured and unstructured data at any scale. Data is stored without having to be structured first, and different types of analytics can then be run on it. Data lakes are typically «cold storage», so they are not ideal for high-performance direct I/O applications341.


Data Linkage – technique that involves bringing together and analyzing data from a variety of sources, typically data that relates to the same individual342.


Data literacy – the ability to derive meaningful information from data, just as literacy in general is the ability to derive information from the written word. The complexity of data analysis, especially in the context of big data, means that data literacy requires some knowledge of mathematics and statistics343.


Data Management is all the disciplines related to managing data as a valuable resource, such as data modeling or metadata management344.


Data management plan (DMP) is a formal document that outlines the creation, management, sharing, and preservation of data, both during and after a research project. Many funding agencies require researchers to prepare a DMP as part of funding proposals345.


Data Mapping – the process of assigning a source data element to a target data element346.
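For illustration, a minimal sketch in Python: each source data element is assigned to a target data element; the field names are hypothetical.

```python
# Hypothetical mapping from a source schema to a target schema.
field_map = {
    "cust_nm": "customer_name",
    "dob": "date_of_birth",
    "addr1": "street_address",
}

source_record = {"cust_nm": "Ada Lovelace", "dob": "1815-12-10", "addr1": "12 St James Sq"}

# Assign each source element's value to its target element.
target_record = {target: source_record[source] for source, target in field_map.items()}
print(target_record)
```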


Data markup is the stage of processing structured and unstructured data during which data (including text documents, photo and video images) are assigned identifiers that reflect the type of data (data classification), and/or the data is interpreted to solve a specific problem, including by means of machine learning methods (National Strategy for the Development of Artificial Intelligence for the period up to 2030).


Data mining is the process of data analysis and information extraction from large datasets with machine learning, statistical approaches, and many others. Data mining is the process of finding anomalies, patterns and correlations within large data sets to predict outcomes. Using a broad range of techniques, you can use this information to increase revenues, cut costs, improve customer relationships, reduce risks and more. Also, Data mining is the process of turning raw data into useful information by using software to look for meaningful patterns347,348,349.
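For illustration, a toy sketch of pattern mining with the Python standard library: counting which items frequently co-occur in transactions; the transaction data is invented.

```python
from collections import Counter
from itertools import combinations

# Invented transaction data: which items are bought together?
transactions = [
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"milk", "cereal"},
    {"bread", "butter", "cereal"},
]

# Count every pair of items appearing in the same transaction.
pair_counts = Counter(
    pair
    for basket in transactions
    for pair in combinations(sorted(basket), 2)
)

# The most frequent pairs are simple patterns mined from the raw data.
print(pair_counts.most_common(3))
```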


Data modeling is the process of creating a simplified diagram of a software system and the data elements it contains, using text and symbols to represent the data and how it flows. Data models provide a blueprint for designing a new database or reengineering a legacy application. Overall, data modeling helps an organization use its data effectively to meet business needs for information350.
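For illustration, a data model can be sketched in code before any database exists; the two related entities and their fields below are hypothetical.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Customer:
    customer_id: int
    name: str
    email: Optional[str] = None

@dataclass
class Order:
    order_id: int
    customer_id: int      # reference back to Customer, a foreign-key style link
    placed_on: date
    total: float

alice = Customer(1, "Alice", "alice@example.com")
order = Order(100, alice.customer_id, date(2024, 1, 15), 99.90)
print(order)
```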


Data portability allows individuals to obtain and reuse their personal data for their own purposes across different services. It allows them to move, copy or transfer personal data easily from one IT environment to another in a safe and secure way, without affecting its usability351.


Data Privacy – the assurance that a person's or organization's personal and private information is not inappropriately disclosed. Ensuring Data Privacy requires Access Management, eSecurity, and other data protection efforts352.


Data Processing within the field of information technology, typically means the processing of information by machines. Data processing is defined by procedures designed to make a data collection easier to use, ensure its accuracy, enhance its utility, optimize its format, protect confidentiality, etc. For archival purposes, the process and results of data processing must be systematically and comprehensively captured so that the process applied to the data is transparent to users353.


Data Processor (or Processor) – the natural or legal person, or any other body, which processes personal data on behalf of the controller354.


Data Protection Authority monitors and supervises, through investigative and corrective powers, the application of the data protection law. It provides expert advice on data protection issues and handles complaints about actions that may have breached the law355.


Data protection is the process of protecting data and involves the relationship between the collection and dissemination of data and technology, the public perception and expectation of privacy, and the political and legal underpinnings surrounding that data. It aims to strike a balance between individual privacy rights and the use of data for business purposes356.


Data Protection Officer ensures that the organisation processes the personal data of its staff, customers, providers or any other individuals (also referred to as data subjects) in compliance with the applicable data protection rules357.


Data Requestor – person or institution that is looking for data and provides the necessary infrastructure, e.g. a publicly available Semantic Container initialized with a semantic description of the data request and intended purpose of the collected data358.


Data Science is a broad grouping of mathematics, statistics, probability, computing, data visualization to extract knowledge from a heterogeneous set of data (images, sound, text, genomic data, social network links, physical measurements, etc.). The methods and tools derived from artificial intelligence are part of this family. Data science is the field of study that combines domain expertise, programming skills, and knowledge of mathematics and statistics to extract meaningful insights from data. Data science practitioners apply machine learning algorithms to numbers, text, images, video, audio, and more to produce artificial intelligence (AI) systems to perform tasks that ordinarily require human intelligence. In turn, these systems generate insights which analysts and business users can translate into tangible business value. Data Science is an interdisciplinary field that uses scientific methods, processes, algorithms and systems to extract knowledge and insights from structured and unstructured data, and apply knowledge and actionable insights from data across a broad range of application domains. Also, Data Science is an academic/professional field that comprises several components for data analysis and interpretation through mathematics, statistics and information technology. Thus, a data scientist not only collects and analyzes inputs, but also interprets and relates the facts to the context in which they are inserted359,360,361.


Data set is a collection of data. In the case of tabular data, a data set corresponds to one or more database tables, where every column of a table represents a particular variable, and each row corresponds to a given record of the data set in question. The data set lists values for each of the variables, such as the height and weight of an object, for each member of the data set. Data sets can also consist of a collection of documents or files. A data set is also a collection of data records; in the SAS statistical software, a «SAS data set» is the internal representation of data. Also, Data set is a set of data that has undergone preliminary preparation (processing) in accordance with the requirements of the legislation of the Russian Federation on information, information technology and information protection and is necessary for the development of software based on artificial intelligence (National Strategy for the Development of Artificial Intelligence for the period up to 2030)362,363.


Data Sharing – the disclosure of data from one or more organizations to a third party organisation or organizations, or the sharing of data between different parts of an organisation364.


Data Sharing Agreement – common set of rules to be adopted by the various organizations involved in a data sharing operation365.


Data sharing governance – a concept shifting from «ownership» of data to data control and data sharing governance366.


Data silos are repositories of fixed data that remain under the control of one group or department and that are isolated from the rest of the organization367.


Data source is the primary location where the data that is being used comes from368.


Data Stakeholders – those who use, affect, or are affected by data. Data Stakeholders may be upstream producers, gatherers, or acquirers of information; downstream consumers of information, those who manage, transform, or store data, or those who set policies, standards, architectures, or other requirements or constraints369.


Data Steward is a person with data-related responsibilities as set by a Data Governance or Data Stewardship program. Often, Data Stewards fall into multiple types: Data Quality Stewards, Data Definition Stewards, Data Usage Stewards, etc.370.


Data Subject is the person whose personal data are collected, held or processed; an identified or identifiable natural person who is the subject of personal data371.


Data transfer rate (DTR) is the amount of digital data that is moved from one place to another in a given time. The data transfer rate can be viewed as the speed of travel of a given amount of data from one place to another. In general, the greater the bandwidth of a given path, the higher the data transfer rate372.
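For illustration, the definition reduces to a simple ratio of data volume to time; the numbers below are invented.

```python
# Illustrative numbers: 500 MiB moved in 40 seconds.
bytes_moved = 500 * 1024 * 1024
seconds = 40

dtr_bytes_per_s = bytes_moved / seconds
dtr_megabits_per_s = dtr_bytes_per_s * 8 / 1_000_000   # bytes/s -> Mbit/s

print(f"{dtr_bytes_per_s:,.0f} B/s  ≈ {dtr_megabits_per_s:.1f} Mbit/s")
```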


Data variability describes how far apart data points lie from each other and from the center of a distribution. Along with measures of central tendency, measures of variability give you descriptive statistics that summarize your data373.
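For illustration, common measures of variability can be computed with Python's standard statistics module; the sample values are invented.

```python
import statistics

sample = [4.0, 7.0, 7.5, 8.0, 12.0]

spread = {
    "range": max(sample) - min(sample),
    "variance": statistics.variance(sample),   # sample variance
    "std_dev": statistics.stdev(sample),       # sample standard deviation
    "mean": statistics.mean(sample),           # central tendency, for context
}
print(spread)
```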


Data veracity is the degree of accuracy or truthfulness of a data set. In the context of big data, it’s not just the quality of the data that is important, but how trustworthy the source, the type, and processing of the data are374.


Database is an organized collection of structured information, or data, typically stored electronically in a computer system. A database is usually controlled by a database management system (DBMS). Together, the data and the DBMS, along with the applications that are associated with them, are referred to as a database system, often shortened to just database. Data within the most common types of databases in operation today is typically modeled in rows and columns in a series of tables to make processing and data querying efficient. The data can then be easily accessed, managed, modified, updated, controlled, and organized. Most databases use structured query language (SQL) for writing and querying data375.
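For illustration, a minimal sketch of the rows-and-columns model and an SQL query, using the sqlite3 module bundled with Python; the table and values are invented.

```python
import sqlite3

# An in-memory database: data is modeled as rows and columns in a table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sensor (id INTEGER PRIMARY KEY, name TEXT, reading REAL)")
conn.executemany(
    "INSERT INTO sensor (name, reading) VALUES (?, ?)",
    [("temp", 21.5), ("temp", 22.1), ("pressure", 1.02)],
)

# SQL is used both to write and to query the data.
for row in conn.execute("SELECT name, AVG(reading) FROM sensor GROUP BY name"):
    print(row)
conn.close()
```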


Database management system (DBMS) is a software package designed to define, manipulate, retrieve and manage data in a database. A DBMS generally manipulates the data itself, the data format, field names, record structure and file structure. It also defines rules to validate and manipulate this data. Database management systems are set up on specific data handling concepts, as the practice of administrating a database evolves. The earliest databases only handled individual single pieces of specially formatted data. Today’s more evolved systems can handle different kinds of less formatted data and tie them together in more elaborate ways376.


Databus is a data-centric sharing system where applications exchange information in a virtual, global data space377.


Data-driven decisions are decisions made based on data/information, not experience, hunches, or intuition378.


Dataflow Processing Unit (DPU) is a programmable specialized electronic circuit with hardware accelerated data processing for data-oriented computing.


DDI instance is an XML document marked up according to the DDI DTD. In other words, a codebook or catalog record marked up in DDI-compliant XML379.


Debugging is the process of finding and resolving bugs (defects or problems that prevent correct operation) within computer programs, software, or systems. Debugging tactics can involve interactive debugging, control flow analysis, unit testing, integration testing, log file analysis, monitoring at the application or system level, memory dumps, and profiling. Many programming languages and software development tools also offer programs to aid in debugging, known as debuggers380.
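For illustration, a tiny sketch of two of the tactics named above, log-based inspection and a simple test check, applied to a deliberately buggy function; the bug and data are invented.

```python
import logging

logging.basicConfig(level=logging.DEBUG)

def mean(values):
    total = sum(values[:-1])   # deliberate bug: the last value is never included
    logging.debug("values=%s total=%s", values, total)   # log output for inspection
    return total / len(values)

result = mean([2, 4, 6])
print("expected 4.0, got", result)   # 2.0 - the logged total reveals the bad slice
# Interactive alternative: run `python -m pdb this_script.py` and step through mean().
```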


Decentralized applications (dApps) are digital applications or programs that exist and run on a blockchain or peer-to-peer (P2P) network of computers instead of a single computer. DApps (also called «dapps») are outside the purview and control of a single authority. DApps – which are often built on the Ethereum platform – can be developed for a variety of purposes including gaming, finance, and social media381.


Decentralized control is a process in which a significant number of control actions related to a given object are generated by the object itself on the basis of self-government.


Decentralized finance (DeFi) is an emerging financial technology based on secure distributed ledgers similar to those used by cryptocurrencies. The system removes the control banks and institutions have on money, financial products, and financial services382.


Decision intelligence (DI) is a practical discipline used to improve the decision-making process by clearly understanding and programmatically developing how decisions are made and how the outcomes are evaluated, managed and improved through feedback. Also, Decision intelligence is a discipline that offers a framework to help data and analytics practitioners develop, model, align, implement, track, and modify decision models and processes related to business results and performance.


Decision Rights – the system of determining who makes a decision, and when, and how, and under what circumstances. Formalizing Decision Rights is a key function of Data Governance383.


Decision support system (DSS) is an information system that supports business or organizational decision-making activities. DSSs serve the management, operations and planning levels of an organization (usually mid and higher management) and help people make decisions about problems that may be rapidly changing and not easily specified in advance – i.e. unstructured and semi-structured decision problems. Decision support systems can be either fully computerized or human-powered, or a combination of both. Also, a Decision Support System is a collection of integrated technologies, software and hardware that constitutes the main support of the organization's decision-making process384.


Decision tree is a tree-and-branch model used to represent decisions and their possible consequences, similar to a flowchart.
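For illustration, a small decision tree can be written directly as nested branching logic; the rules and thresholds below are invented.

```python
# Invented rules: decide whether to approve a small loan application.
def decide(applicant):
    if applicant["income"] >= 40_000:
        if applicant["existing_debt"] < 10_000:
            return "approve"
        return "review manually"
    if applicant["has_guarantor"]:
        return "approve with guarantor"
    return "decline"

# Each internal branch is a decision, each return value a possible consequence,
# mirroring the flowchart-like structure of a decision tree.
print(decide({"income": 55_000, "existing_debt": 2_000, "has_guarantor": False}))
```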


Decompression is a feature that is used to restore data to uncompressed form after compression385.
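For illustration, a minimal demonstration with Python's built-in zlib module: compress, then restore the original bytes.

```python
import zlib

original = b"data " * 1000                  # repetitive data compresses well
compressed = zlib.compress(original)
restored = zlib.decompress(compressed)      # decompression restores the exact bytes

print(len(original), len(compressed), restored == original)
```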


Deep Learning (also known as deep structured learning) is part of a broader family of machine learning methods based on artificial neural networks with representation learning. Learning can be supervised, semi-supervised or unsupervised. Deep-learning architectures such as deep neural networks, deep belief networks, deep reinforcement learning, recurrent neural networks and convolutional neural networks have been applied to fields including computer vision, speech recognition, natural language processing, machine translation, bioinformatics, drug design, medical image analysis, climate science, material inspection and board game programs, where they have produced results comparable to and in some cases surpassing human expert performance. Also, Deep Learning (DL) is a subfield of machine learning concerned with algorithms that are inspired by the human brain that works in a hierarchical way. Deep Learning models, which are mostly based on the (artificial) neural networks, have been applied to different fields, such as speech recognition, computer vision, and natural language processing386.


Deep neural network – a multilayer network containing several (many) hidden layers of neurons between the input and output layers, which allows modeling complex nonlinear relationships. Deep neural networks (DNNs) are now increasingly used to solve such artificial intelligence problems as speech recognition, natural language processing, computer vision, etc., including in robotics387.
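For illustration, a sketch of a forward pass through a small multilayer network with two hidden layers and a nonlinear activation, using NumPy; the weights are random, so this shows the structure only, not a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Layer sizes: 4 inputs -> hidden layers of 8 and 6 neurons -> 1 output.
sizes = [4, 8, 6, 1]
weights = [rng.standard_normal((m, n)) * 0.5 for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    # Each hidden layer applies a linear map followed by a nonlinearity,
    # which is what lets the network model complex nonlinear relationships.
    for w, b in zip(weights[:-1], biases[:-1]):
        x = relu(x @ w + b)
    return x @ weights[-1] + biases[-1]     # linear output layer

print(forward(np.array([0.2, -1.0, 0.5, 3.0])))
```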


Deep Technology (DEEP TECH) refers to a startup whose business idea is based on a scientific or otherwise extensive (deep) understanding of technology. The term has been adopted to set certain companies apart from other startups which are also technology driven. A deep tech company may, for instance, base the core of its operations on particularly complex mathematics in the creation of software algorithms. Deep technology companies typically comprise artificial intelligence companies, which try to replicate human thinking, build navigation systems for flying cars and so on388.
