DI - KRG - Journal Articles

Recent Submissions

  • Publication
    Agile Delphi methodology: A case study on how technology impacts burnout syndrome in the post-pandemic era
    (Frontiers, 2023-01-20) Medina Domínguez, Fuensanta; Sánchez Segura, María Isabel; Amescua Seco, Antonio de; Dugarte Peña, Germán Lenin; Villalba Arranz, Santiago; Comunidad de Madrid; Universidad Carlos III de Madrid
    Introduction: In the post-pandemic era, many habits in different areas of our lives have changed. The exponential growth in the use of technology to perform work activities is one of them. At the same time, there has been a marked increase in burnout syndrome. Is this a coincidence? Could they be two interconnected developments? What if they were? Can we use technology to mitigate this syndrome? This article presents the agile Delphi methodology (MAD), an evolved version of the Delphi method, adapted to the needs of modern-day society. Methods: To drive Occupational Health and Safety (OHS) experts to reach a consensus on what technological and non-technological factors could be causing the burnout syndrome experienced by workers in the post-pandemic era, MAD has been used in a specific case study. This study formally presents MAD and describes the stages enacted to run Delphi experiments agilely. Results: MAD is more efficient than the traditional Delphi methodology, reducing the time taken to reach a consensus and increasing the quality of the resulting products. Discussion: OHS experts identified factors that affect and cause an increase in burnout syndrome as well as mechanisms to mitigate their effects. The next step is to evaluate whether, as the experts predict, burnout syndrome decreases with the mechanisms identified in this case study.
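    The abstract does not specify how MAD quantifies expert consensus; as a hedged illustration, the sketch below uses Kendall's W, a coefficient of concordance commonly reported in Delphi studies. The expert rankings and the 0.7 stopping threshold are invented for the example.

```python
# Illustrative consensus check for a Delphi round using Kendall's W.
# The paper does not state MAD's consensus metric or stopping rule;
# both the metric choice and the 0.7 threshold are assumptions.
import numpy as np

def kendalls_w(rankings: np.ndarray) -> float:
    """rankings: (m experts, n items) matrix of ranks 1..n per expert."""
    m, n = rankings.shape
    rank_sums = rankings.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

# Three hypothetical experts ranking five candidate burnout factors
# (1 = most important).
round_ranks = np.array([
    [1, 2, 3, 4, 5],
    [2, 1, 3, 5, 4],
    [1, 3, 2, 4, 5],
])
w = kendalls_w(round_ranks)
print(f"Kendall's W = {w:.2f}; consensus reached: {w >= 0.7}")
```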
  • Publication
    Point cloud voxel classification of aerial urban LiDAR using voxel attributes and random forest approach
    (Elsevier, 2023-04-01) Aljumaily, Harith; Laefer, Debra F.; Cuadra Fernández, María Dolores; Velasco de Diego, Manuel
    The opportunities now afforded by increasingly available, dense, aerial urban LiDAR point clouds (greater than 100 pts/m2) are arguably stymied by their sheer size, which precludes the effective use of many tools designed for point cloud data mining and classification. This paper introduces the point cloud voxel classification (PCVC) method, an automated, two-step solution for classifying terabytes of data without overwhelming the computational infrastructure. First, the point cloud is voxelized to reduce the number of points that must be processed sequentially. Next, descriptive voxel attributes are assigned to aid in further classification. These attributes describe the point distribution within each voxel and the voxel's geo-location. They comprise 5 point descriptors (density, standard deviation, clustered points, fitted plane, and plane's angle) and 2 voxel position attributes (elevation and neighbors). A random forest algorithm is then used for final classification of the object within each voxel using four categories: ground, roof, wall, and vegetation. The proposed approach was evaluated using a 297,126,417-point dataset from a 1 km2 area in Dublin, Ireland and a 50% denser, 13,912,692-point dataset covering 150 m2 of New York City. PCVC's main advantage is scalability, achieved through a 99% reduction in the number of points that needed to be sequentially categorized. Additionally, PCVC demonstrated strong classification results (precision of 0.92, recall of 0.91, and F1-score of 0.92) compared to previous work on the same dataset (precision of 0.82-0.91, recall of 0.86-0.89, and F1-score of 0.85-0.90).
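    A minimal sketch of the two-step PCVC idea, under stated assumptions: points are binned into voxels, a few per-voxel attributes are computed (simplified stand-ins for the paper's seven descriptors), and a random forest labels each voxel. The voxel size, features, and synthetic data are placeholders, not the paper's configuration.

```python
# Minimal sketch of voxelize-then-classify. Attribute definitions here are
# simplified stand-ins for the paper's descriptors; training data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

VOXEL = 1.0  # voxel edge length in metres (assumed)

def voxel_features(points: np.ndarray):
    """points: (N, 3) array of x, y, z. Returns voxel keys and feature rows."""
    keys = np.floor(points / VOXEL).astype(int)
    feats, ids = [], []
    for key in np.unique(keys, axis=0):
        pts = points[(keys == key).all(axis=1)]
        density = len(pts)          # stand-in for the density descriptor
        z_std = pts[:, 2].std()     # stand-in for standard deviation
        elevation = key[2]          # coarse height attribute
        feats.append([density, z_std, elevation])
        ids.append(tuple(key))
    return ids, np.array(feats)

# Hypothetical labelled voxels: 0=ground, 1=roof, 2=wall, 3=vegetation.
rng = np.random.default_rng(0)
X_train = rng.random((200, 3))
y_train = rng.integers(0, 4, 200)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

_, X_new = voxel_features(rng.random((1000, 3)) * 10)
print(clf.predict(X_new)[:10])
```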
  • Publication
    Scientific Programming Techniques and Algorithms for Data-Intensive Engineering Environments
    (2018-11-05) Alor Hernandez, Giner; Mejia Miranda, Jezreel; Álvarez Rodriguez, José María
  • Publication
    Machine Ethics: Do Androids Dream of Being Good People?
    (Springer, 2023-03-23) Génova Fuster, Gonzalo; Moreno Pelayo, Valentín; González Martín, M. Rosario; Comunidad de Madrid; Universidad Carlos III de Madrid
    Is ethics a computable function? Can machines learn ethics like humans do? If teaching consists in no more than programming, training, indoctrinating… and if ethics is merely following a code of conduct, then yes, we can teach ethics to algorithmic machines. But if ethics is not merely about following a code of conduct or about imitating the behavior of others, then an approach based on computing outcomes, and on the reduction of ethics to the compilation and application of a set of rules, either a priori or learned, misses the point. Our intention is not to solve the technical problem of machine ethics, but to learn something about human ethics, and its rationality, by reflecting on the ethics that can and should be implemented in machines. Any machine ethics implementation will have to face a number of fundamental or conceptual problems, which in the end refer to philosophical questions, such as: what is a human being (or more generally, what is a worthy being); what is human intentional acting; and how are intentional actions and their consequences morally evaluated. We are convinced that a proper understanding of ethical issues in AI can teach us something valuable about ourselves, and what it means to lead a free and responsible ethical life, that is, being good people beyond merely "following a moral code". In the end we believe that rationality must be seen to involve more than just computing, and that value rationality is beyond numbers. Such an understanding is a required step to recovering a renewed rationality of ethics, one that is urgently needed in our highly technified society.
  • Publication
    Towards a method to quantitatively measure toolchain interoperability in the engineering lifecycle: A case study of digital hardware design
    (Elsevier, 2023-08-01) Álvarez Rodríguez, José María; Mendieta Zuniga, Roy Arturo; Cibrian Sánchez, Eduardo; Llorens Morillo, Juan Bautista; European Commission; Universidad Carlos III de Madrid
    The engineering lifecycle of cyber-physical systems is becoming more challenging than ever. Multiple engineering disciplines must be orchestrated to produce both a virtual and physical version of the system. Each engineering discipline makes use of their own methods and tools generating different types of work products that must be consistently linked together and reused throughout the lifecycle. Requirements, logical/descriptive and physical/analytical models, 3D designs, test case descriptions, product lines, ontologies, evidence argumentations, and many other work products are continuously being produced and integrated to implement the technical engineering and technical management processes established in standards such as the ISO/IEC/IEEE 15288:2015 "Systems and software engineering-System life cycle processes". Toolchains are then created as a set of collaborative tools to provide an executable version of the required technical processes. In this engineering environment, there is a need for technical interoperability enabling tools to easily exchange data and invoke operations among them under different protocols, formats, and schemas. However, this automation of tasks and lifecycle processes does not come free of charge. Although enterprise integration patterns, shared and standardized data schemas and business process management tools are being used to implement toolchains, the reality shows that in many cases, the integration of tools within a toolchain is implemented through point-to-point connectors or applying some architectural style such as a communication bus to ease data exchange and to invoke operations. In this context, the ability to measure the current and expected degree of interoperability becomes relevant: 1) to understand the implications of defining a toolchain (need of different protocols, formats, schemas and tool interconnections) and 2) to measure the effort to implement the desired toolchain. To improve the management of the engineering lifecycle, a method is defined: 1) to measure the degree of interoperability within a technical engineering process implemented with a toolchain and 2) to estimate the effort to transition from an existing toolchain to another. A case study in the field of digital hardware design comprising 6 different technical engineering processes and 7 domain engineering tools is conducted to demonstrate and validate the proposed method.
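    The paper's measurement method is not reproduced here; the sketch below only illustrates one plausible shape such a metric could take: score each tool-to-tool connection by shared protocols and formats, then average over the toolchain. The tool names, protocols, formats, and the scoring formula are all hypothetical.

```python
# Illustrative (not the paper's actual formula) degree-of-interoperability
# score for a toolchain: each connection is scored by whether the two tools
# share a protocol and a data format; the toolchain score is the mean.
from dataclasses import dataclass, field

@dataclass
class Tool:
    name: str
    protocols: set = field(default_factory=set)
    formats: set = field(default_factory=set)

def connection_score(a: Tool, b: Tool) -> float:
    shared_protocol = 1.0 if a.protocols & b.protocols else 0.0
    shared_format = 1.0 if a.formats & b.formats else 0.0
    return (shared_protocol + shared_format) / 2

def toolchain_score(chain: list) -> float:
    pairs = list(zip(chain, chain[1:]))
    return sum(connection_score(a, b) for a, b in pairs) / len(pairs)

# Hypothetical three-tool chain from a digital hardware design flow.
req_tool = Tool("requirements", {"http"}, {"reqif"})
model_tool = Tool("modeling", {"http"}, {"xmi"})
sim_tool = Tool("simulation", {"ftp"}, {"xmi"})
print(f"degree of interoperability: {toolchain_score([req_tool, model_tool, sim_tool]):.2f}")
```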
  • Publication
    A free mind cannot be digitally transferred
    (Springer, 2022-06-27) Génova Fuster, Gonzalo; Moreno Pelayo, Valentín; Parra Corredor, Eugenio; Comunidad de Madrid; Ministerio de Ciencia e Innovación (España); Universidad Carlos III de Madrid
    The digital transfer of the mind to a computer system (i.e., mind uploading) requires representing the mind as a finite sequence of bits (1s and 0s). The classic “stored-program computer” paradigm, in turn, implies the equivalence between program and data, so that the sequence of bits themselves can be interpreted as a program, which will be algorithmically executed in the receiving device. Now, according to a previous proof, on which this paper is based, a computational or algorithmic machine, however complex, cannot be free (in the sense of ‘self-determined’). Consequently, a finite sequence of bits cannot adequately represent a free mind and, therefore, a free mind cannot be digitally transferred, quod erat demonstrandum. The impossibility of making this transfer, as demonstrated here, should be a concern especially for those who wish to achieve it. Since we intend this to be a rigorous demonstration, we must give precise definitions and conditions of validity. The most important part of the paper is devoted to explaining the meaning and reasonableness of these definitions and conditions (for example that being truly free means being self-determined). Special attention is paid, also, to the philosophical implications of the demonstration. Finally, this thesis is distinguished from other closely related issues (such as other possible technological difficulties to “discretize” the mind; or, whether it is possible to transfer the mind from one material support to another one in a non-digital way).
  • Publication
    Smart occupational health and safety for a digital era and its place in smart and sustainable cities
    (AIMS Press, 2021-10-14) Sánchez Segura, María Isabel; Dugarte Peña, Germán Lenin; Amescua Seco, Antonio de; Medina Domínguez, Fuensanta; Comunidad de Madrid; Universidad Carlos III de Madrid
    As innovative technologies emerge, there is a need to evolve the environments in which these technologies are used. The trend has shifted from considering technology as a support service towards making it the means for transforming all complex systems. Smart cities focus their development on the use of technology to transform every aspect of society and embrace the complexity of these transformations towards the well-being and safety of the people inhabiting these cities. Occupational Health and Safety (OHS) is an essential aspect to be considered in the design of a smart city and its digital ecosystems; however, it remains unconsidered in most smart city frameworks, despite the need for a specific space for smart OHS. This paper summarizes a 9-month process of generating a value proposition for evolving the OHS sector, based on a value map created with the participation of several stakeholders. They focused on identifying the products, methods, organizational structures and technologies required to develop an updated, dynamic and robust prevention model focused on workers in smart and complex contexts, and to improve organizations' capability to guarantee safety even in the most changing, digital and disruptive settings. To assess the relevance and validity of this value map, a study was carried out to match its elements and the specific and conceptual products discovered against the past needs and future trends of the sector that a set of renowned stakeholders and key opinion leaders (with OHS expertise from several companies and industries) have recently defined for the 2020s. A prospective analysis of this match is presented, revealing that there is still a gap to be covered in smart city design: the explicit guarantee of safety for workers.
  • Publication
    A Survey on Energy Efficiency in Smart Homes and Smart Grids
    (MDPI, 2021-11-03) Prieto González, Lisardo; Fensel, Anna; Gómez Berbís, Juan Miguel; Popa, Ángela; Amescua Seco, Antonio de
    Empowered by the emergence of novel information and communication technologies (ICTs) such as sensors and high-performance digital communication systems, Europe has adapted its electricity distribution network into a modern infrastructure known as a smart grid (SG). The benefits of this new infrastructure include precise, real-time capacity for measuring and monitoring the different energy-relevant parameters at the various points of the grid, and for the remote operation and optimization of distribution. Furthermore, a new user profile is derived from this novel infrastructure, known as a prosumer (a user that can both produce energy for and consume energy from the grid), who can benefit from the features derived from applying advanced analytics and semantic technologies to the rich amount of big data generated by the different subsystems. However, this novel, highly interconnected infrastructure also presents some significant drawbacks, such as those related to information security (IS). We provide a systematic literature survey of the ICT-empowered environments that comprise SGs and homes, and of the application of modern artificial intelligence (AI) technologies with sensor fusion systems and actuators to ensure energy efficiency in such systems. Furthermore, we outline the current challenges and outlook for this field. These include new developments in microgrids and data-driven energy efficiency that lead to better knowledge representation and decision-making for smart homes and SGs.
  • Publication
    Valuable business knowledge asset discovery by processing unstructured data
    (MDPI, 2022-10-02) Sánchez Segura, María Isabel; González Cruz, Roxana; Medina Domínguez, Fuensanta; Dugarte Peña, Germán Lenin; Ministerio de Ciencia e Innovación (España)
    Modern organizations are challenged to enact a digital transformation and improve their competitiveness while contributing to the ninth Sustainable Development Goal (SDG), 'Build resilient infrastructure, promote sustainable industrialization and foster innovation'. The discovery of knowledge assets hidden in process data may help to digitalize processes. Working on a valuable knowledge asset discovery process, we found a major challenge in that organizational data and knowledge are likely to be unstructured and undigitized, constraining the power of today's process mining methodologies (PMM). Whereas PMM has proved itself in digitally mature companies, its scope becomes wider with the complement proposed in this paper, embracing organizations still improving their digital maturity on the basis of available data. We propose the C4PM method, which integrates agile principles, systems thinking and natural language processing techniques to analyze the behavioral patterns of organizational semi-structured or unstructured data from a holistic perspective, to discover valuable hidden information and to uncover the related knowledge assets aligned with the organization's strategic or business goals. Those assets are the key to pointing out potential processes suitable for handling with PMM, empowering a sustainable organizational digital transformation. A case study analysis was conducted on a dataset containing information on employees' emails in a multinational company.
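    As a rough illustration of the unstructured-data mining step only (not the full C4PM method, which also integrates agile principles and systems thinking), the sketch below clusters a tiny hypothetical email corpus with TF-IDF and k-means to surface candidate knowledge-asset themes.

```python
# Sketch of mining recurring themes from unstructured email text as a proxy
# for surfacing candidate knowledge assets. Corpus and cluster count are
# assumptions; the real method is far richer.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

emails = [  # hypothetical corpus
    "Please review the invoice approval workflow before Friday",
    "Invoice approval is stuck again, who owns this process?",
    "Onboarding checklist for new hires attached",
    "New hire onboarding needs the IT access form",
]
vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(emails)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for i, centre in enumerate(km.cluster_centers_):
    top = [terms[j] for j in centre.argsort()[-3:][::-1]]
    print(f"candidate asset cluster {i}: {top}")
```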
  • Publication
    Artificial neural network model to predict student performance using nonpersonal information
    (Frontiers Media, 2023-02-09) Chavez, Heyul; Chávez Arias, Bill; Contreras Rosas, Sebastián; Álvarez Rodríguez, José María; Raymundo, Carlos
    In recent years, artificial intelligence has played an important role in education, wherein one of the most common applications is forecasting students' academic performance based on personal information such as social status, income, address, etc. This study proposes and develops an artificial neural network model capable of determining whether a student will pass a certain class without using personal or sensitive information that may compromise student privacy. For model training, we used information regarding 32,000 students collected from The Open University of the United Kingdom, such as the number of times they took the course, average number of evaluations, course pass rate, average use of virtual materials per date and number of clicks in virtual classrooms. With the selected attributes, the model achieved 93.81% accuracy, 94.15% precision, 95.13% recall, and a 94.64% F1-score. These results will help academic authorities take measures to avoid student withdrawal and underachievement.
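    A minimal sketch of a feed-forward network over the nonpersonal features the abstract lists. The layer sizes, the synthetic data, and the pass/fail rule generating the labels are assumptions; this does not reproduce the paper's architecture or its reported scores.

```python
# Minimal pass/fail predictor over nonpersonal features. All data and
# hyperparameters here are assumed for illustration.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Columns: attempts, avg_evaluations, course_pass_rate,
# avg_virtual_material_use, clicks_in_virtual_classroom.
X = rng.random((2000, 5))
# Synthetic labels: weighted feature sum as a stand-in for pass/fail.
y = (X @ np.array([0.1, 0.3, 0.4, 0.1, 0.1]) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
model.fit(X_tr, y_tr)
print(f"held-out accuracy: {model.score(X_te, y_te):.2f}")
```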
  • Publication
    Preface special issue on Educational Applications on the web of data: new trends and perspectives
    (Elsevier, 2018-06-01) Alor-Hernández, Giner; Álvarez Rodríguez, José María
  • Publication
    Coding vs presenting: a multicultural study on emotions
    (Emerald, 2020-08-13) Colomo Palacios, Ricardo; Casado Lumbreras, Cristina; Álvarez Rodríguez, José María; Yilmaz, Murat
    Purpose: The purpose of this paper is to explore and compare the emotions perceived while coding and presenting among software students, comparing three different countries and also performing a gender analysis. Design/methodology/approach: Empirical data are gathered by means of the discrete emotions questionnaire, which was distributed to a group of students (n = 174) in three different countries: Norway, Spain and Turkey. All emotions are self-assessed on a Likert scale. Findings: The results show that the two tasks are emotionally different for the subjects of all countries: presentation is described as a task that produces mainly fear and anxiety, whereas coding tasks produce anger and rage, but also happiness and satisfaction. With regard to gender differences, men feel less scared in presentation tasks, whereas women report more desire in coding activities. It is concluded that it is important to be aware of and take into account the different emotions perceived by students in their activities. Moreover, it is also important to note the different intensities of these emotions across cultures and genders. Originality/value: This study is among the few to study emotions perceived in software work by means of a multicultural approach using quantitative research methods. The research results enrich computing literacy theory in human factors.
  • Publication
    Genetic algorithms: a practical approach to generate textual patterns for requirements authoring
    (MDPI, 2021-12-01) Poza Carrasco, Jesús Manuel; Moreno Pelayo, Valentín; Fraga Vázquez, Anabel; Álvarez Rodríguez, José María
    The writing of accurate requirements is a critical factor in assuring the success of a project. Text patterns are knowledge artifacts that are used as templates to guide engineers in the requirements authoring process. However, generating a text pattern set for a particular domain is a time-consuming and costly activity that must be carried out by specialists. This research proposes a method of automatically generating text patterns from an initial corpus of high-quality requirements, using genetic algorithms and a separate-and-conquer strategy to create a complete set of patterns. Our results show this method can generate a valid pattern set suitable for requirements authoring, outperforming existing methods by 233%: 2.87 requirements matched per pattern found, as opposed to 1.23 using alternative methods.
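    A toy sketch of the separate-and-conquer loop with a mutation-only evolutionary step (the paper's genetic algorithm and pattern encoding are richer): evolve one token pattern with '*' wildcards that covers many requirements, remove the covered requirements, and repeat. The corpus, fitness, and operators are assumptions.

```python
# Separate-and-conquer over a tiny requirement corpus. Patterns are token
# lists where '*' is a wildcard; everything here is illustrative.
import random

random.seed(0)
REQS = [
    "the system shall log every user action",
    "the system shall log every failed login",
    "the operator shall confirm every shutdown",
]

def matches(pattern, req):
    toks = req.split()
    return len(pattern) == len(toks) and all(
        p in ("*", t) for p, t in zip(pattern, toks))

def evolve_pattern(reqs, generations=50, pop_size=20):
    seed = reqs[0].split()                       # start from a concrete requirement
    pop = [list(seed) for _ in range(pop_size)]
    for _ in range(generations):
        for ind in pop:                          # mutation: generalize one token
            ind[random.randrange(len(ind))] = "*"
        # prefer high coverage, then fewer wildcards (more specific patterns)
        pop.sort(key=lambda p: (-sum(matches(p, r) for r in reqs), p.count("*")))
        pop = pop[: pop_size // 2] + [list(p) for p in pop[: pop_size // 2]]
    return pop[0]

remaining = list(REQS)
while remaining:                                 # separate and conquer
    best = evolve_pattern(remaining)
    covered = [r for r in remaining if matches(best, r)]
    print(" ".join(best), "-> covers", len(covered))
    remaining = [r for r in remaining if r not in covered]
```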
  • Publication
    Semantic recovery of traceability links between system artifacts
    (World Scientific, 2020-11-10) Álvarez Rodríguez, José María; Mendieta Zuniga, Roy Arturo; Moreno Pelayo, Valentín; Sánchez Puebla Rodríguez, Miguel Ángel; Llorens Morillo, Juan Bautista; European Commission
    This paper introduces a mechanism to recover traceability links between the requirements and logical models in the context of critical systems development. Currently, lifecycle processes are covered by a good number of tools that are used to generate different types of artifacts. One of the cornerstone capabilities in the development of critical systems lies in the possibility of automatically recovering traceability links between system artifacts generated at different lifecycle stages. To do so, it is necessary to establish to what extent two or more of these work products are similar, dependent or should be explicitly linked together. However, the different types of artifacts and their internal representations pose a major challenge to unifying how system artifacts are represented and, then, linked together. That is why, in this work, a concept-based representation is introduced to provide a semantic and unified description of any system artifact. Furthermore, a traceability function is defined and implemented to exploit this new semantic representation and to support the recovery of traceability links between different types of system artifacts. In order to evaluate the traceability function, a case study in the railway domain is conducted to compare the precision and recall of recovering traceability links between text-based requirements and logical model elements. As the main outcome of this work, the use of a concept-based paradigm to represent system artifacts is demonstrated as a building block to automatically recover traceability links within the development lifecycle of critical systems.
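    A hedged sketch of a traceability function: artifacts are reduced to concept sets (here, naive keyword extraction rather than the paper's concept-based representation) and linked when their Jaccard similarity clears a threshold. The requirement, model element, stop-word list, and 0.25 threshold are invented.

```python
# Link a text requirement to a model element via concept-set similarity.
# Keyword extraction stands in for the paper's richer concept representation.
STOP = {"the", "shall", "a", "of", "to", "is"}

def concepts(text: str) -> set:
    return {w.lower().strip(".,:") for w in text.split()} - STOP

def similarity(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

requirement = "The brake controller shall stop the train within 200 m"
model_element = "BrakeController: stops train, max distance 200 m"

score = similarity(concepts(requirement), concepts(model_element))
print(f"similarity={score:.2f}, link recovered: {score >= 0.25}")
```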
  • Publication
    Business information architecture for successful project implementation based on sentiment analysis in the tourist sector
    (Springer, 2019-06-28) Zapata, Gianpierre; Murga, Javier; Domínguez, Francisco; Martínez Moguerza, Javier; Álvarez Rodríguez, José María
    In today's market, there is a wide range of failed IT projects in specialized small and medium-sized companies because of poor control of the gap between the business and its vision. In other words, acquired goods are not being sold, a scenario which is very common in tourism retail companies. These companies buy a number of travel packages from big companies and, due to lack of demand, these packages expire, becoming an expense rather than an investment. To solve this problem, we propose to detect the problems that limit a company by re-engineering its processes, enabling the implementation of a business architecture based on sentiment analysis that allows small and medium-sized tourism enterprises (SMEs) to make better decisions and analyze the information most of them possess without knowing how to exploit it. In addition, a case study was carried out with a real company, comparing data before and after using the proposed model in order to validate its feasibility.
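    The kind of sentiment-analysis building block such an architecture could plug in, sketched with TF-IDF and logistic regression; the labeled reviews are hypothetical, and a real deployment would train on far more data.

```python
# Tiny sentiment classifier over hypothetical tourism reviews.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews = ["great tour, amazing guide", "terrible bus, waste of money",
           "lovely hotel and beaches", "awful delays, never again"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(reviews, labels)
print(clf.predict(["the guide was great but the bus was awful"]))
```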
  • Publication
    Enabling system artefact exchange and selection through a linked data layer
    (JUCS, 2018-01-01) Álvarez Rodríguez, José María; Mendieta Zuniga, Roy Arturo; Vara González, José Luis de la; Fraga Vázquez, Anabel; Llorens Morillo, Juan Bautista; European Commission; Ministerio de Economía y Competitividad (España)
    The use of different techniques and tools is a common practice to cover all stages in the systems development lifecycle, generating a great number of system artefacts. Moreover, these artefacts are commonly encoded in different formats and can only be accessed, in most cases, through proprietary and non-standard protocols. This scenario can be considered a real nightmare for software or systems reuse. Possible solutions imply the creation of a true collaborative development environment where tools can exchange and share data, information and knowledge. In this context, the OSLC (Open Services for Lifecycle Collaboration) initiative pursues the creation of public specifications (data shapes) to exchange any artefact generated during the development lifecycle, by applying the principles of the Linked Data initiative. In this paper, the authors present a solution that enables true multi-format system artefact reuse by means of an OSLC-based specification to share and exchange any artefact under the principles of the Linked Data initiative. Finally, two experiments are conducted to demonstrate the advantages of enabling an input/output interface based on an OSLC implementation on top of an existing commercial tool (the Knowledge Manager). Thus, it is possible to enhance the representation and retrieval capabilities of system artefacts by considering the whole underlying knowledge graph generated by the different system artefacts and their relationships. After performing 45 different queries over logical and physical models stored in Papyrus, IBM Rhapsody and Simulink, precision and recall results are promising, showing average values between 70% and 80%.
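    A minimal sketch of exposing a system artefact as Linked Data with rdflib. The namespace URI is the published OSLC Requirements Management vocabulary; the artefact URI and its properties are hypothetical, and this does not reproduce the Knowledge Manager integration.

```python
# Expose one requirement artefact as RDF triples, OSLC-RM style.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF

OSLC_RM = Namespace("http://open-services.net/ns/rm#")
g = Graph()
req = URIRef("http://example.org/artifacts/REQ-042")  # hypothetical artefact
g.add((req, RDF.type, OSLC_RM.Requirement))
g.add((req, DCTERMS.title, Literal("The system shall log every user action")))
g.add((req, DCTERMS.identifier, Literal("REQ-042")))
print(g.serialize(format="turtle"))
```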
  • Publication
    Developments in Aerospace Software Engineering practices for VSEs: An overview of the process requirements and practices of integrated Maturity models and Standards
    (Elsevier, 2021-10-01) Eito Brun, Ricardo; Amescua Seco, Antonio de
    As part of the evolution of the Space market in recent years, globally referred to as Space 2.0, small companies are playing an increasingly relevant role in different aerospace projects. Business incubators established by the European Space Agency (ESA) and similar entities are evidence of the need to move initiatives to small companies characterized by greater flexibility to develop specific activities. Software is a key component in most aerospace projects, and the success of these initiatives and projects usually depends on the capability to develop reliable software following well-defined standards. But small entities face difficulties when adopting software development standards that were conceived with larger organizations and big programs in mind. The need to define software development standards tailored to small companies and groups is a permanent subject of discussion, not only in the aerospace field, and has led in recent years to the publication of the ISO/IEC 29110 series of systems and software engineering standards and guides, aimed at solving the issues that Very Small Entities (VSEs), settings having up to twenty-five people, found with other standards like CMMI or SPICE. This paper discusses the tailoring defined by different aerospace organizations for VSEs in the aerospace industry, and presents a conceptual arrangement of the standard based on meta-modeling languages that allows extension and full customization through the incorporation of specific software engineering requirements and practices from ECSS (European Cooperation for Space Standardization).
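    A hedged sketch of the meta-modeling idea using plain dataclasses: standard process elements become extensible instances to which a VSE can attach ECSS-specific practices. The class and field names are assumptions, not the paper's actual metamodel.

```python
# Toy process metamodel: a standard activity extended with an ECSS practice.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    source_standard: str  # e.g. "ISO/IEC 29110" or "ECSS"

@dataclass
class Activity:
    name: str
    tasks: list = field(default_factory=list)

@dataclass
class Process:
    name: str
    activities: list = field(default_factory=list)

impl = Process("Software Implementation", [
    Activity("Software Requirements Analysis", [
        Task("Document requirements", "ISO/IEC 29110"),
        Task("Trace requirements to ECSS clauses", "ECSS"),  # VSE extension
    ]),
])
for act in impl.activities:
    for task in act.tasks:
        print(f"{impl.name} / {act.name} / {task.name} [{task.source_standard}]")
```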
  • Publication
    Automatic classification of web images as UML static diagrams using machine learning techniques
    (MDPI, 2020-04-01) Moreno Pelayo, Valentín; Génova Fuster, Gonzalo; Alejandres Sánchez, Manuela; Fraga Vázquez, Anabel; European Commission; Ministerio de Economía y Competitividad (España)
    Our purpose in this research is to develop a method to automatically and efficiently classify web images as Unified Modeling Language (UML) static diagrams, and to produce a computer tool that implements this function. The tool receives a bitmap file (in different formats) as input and communicates whether the image corresponds to a diagram. For pragmatic reasons, we restricted ourselves to the simplest kinds of diagrams that are more useful for automated software reuse: computer-edited 2D representations of static diagrams. The tool does not require that the images be explicitly or implicitly tagged as UML diagrams. The tool extracts graphical characteristics from each image (such as grayscale histogram, color histogram and elementary geometric forms) and uses a combination of rules to classify it. The rules are obtained with machine learning techniques (rule induction) from a sample of 19,000 web images manually classified by experts. In this work, we do not consider the textual contents of the images. Our tool reaches nearly 95% agreement with manually classified instances, improving on the effectiveness of related research works. Moreover, despite using a training dataset 15 times bigger, the time required to process each image and extract its graphical features (0.680 s) is seven times lower.
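    A sketch of the features-plus-rules pipeline under stated assumptions: a grayscale histogram is computed per image, and a decision tree stands in for the paper's rule-induction algorithm. The synthetic "images" merely mimic the bright background typical of computer-edited diagrams.

```python
# Histogram features plus induced rules for diagram/non-diagram classification.
# Images and labels are synthetic placeholders.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

def grayscale_histogram(img: np.ndarray, bins: int = 8) -> np.ndarray:
    hist, _ = np.histogram(img, bins=bins, range=(0, 255))
    return hist / hist.sum()

rng = np.random.default_rng(0)
# Stand-ins: "diagrams" skew bright (white background), others are uniform.
diagrams = [rng.normal(230, 20, (64, 64)).clip(0, 255) for _ in range(50)]
others = [rng.uniform(0, 255, (64, 64)) for _ in range(50)]
X = np.array([grayscale_histogram(i) for i in diagrams + others])
y = np.array([1] * 50 + [0] * 50)  # 1 = UML static diagram

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree))  # inspect the induced rules
```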
  • Publication
    OntoTouTra: tourist traceability ontology based on big data analytics
    (MDPI, 2021-11-22) Mendoza-Moreno, Juan Francisco; Santamaria Granados, Luz; Fraga Vázquez, Anabel; Ramírez González, Gustavo
    Tourist traceability is the analysis of the set of actions, procedures, and technical measures that allows us to identify and record the space–time causality of the tourist's touring, from the beginning to the end of the chain of the tourist product. Moreover, the traceability of tourists has implications for infrastructure, transport, products, marketing, the commercial viability of the industry, and the management of the destination's social, environmental, and cultural impact. To this end, a tourist traceability system requires a knowledge base for processing elements such as functions, objects, events, and the logical connectors among them. A knowledge base provides us with information on the preparation, planning, and implementation or operation stages. In this regard, unifying tourism terminology in a traceability system is a challenge because we need a central repository that promotes standards for tourists and suppliers in forming a formal body of knowledge representation. Some studies are related to the construction of ontologies in tourism, but none focus on tourist traceability systems. For the above reasons, we propose OntoTouTra, an ontology that uses formal specifications to represent knowledge of tourist traceability systems. This paper outlines the development of the OntoTouTra ontology and how we gathered and processed data from ubiquitous computing using big data analysis techniques.
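    A few OntoTouTra-style concepts sketched as an OWL ontology with rdflib; the namespace, class, and property names are illustrative guesses, not the published ontology's actual vocabulary.

```python
# Sketch of tourist-traceability concepts as OWL classes and a property.
from rdflib import Graph, Namespace, URIRef
from rdflib.namespace import OWL, RDF, RDFS

TT = Namespace("http://example.org/ontotoutra#")  # hypothetical namespace
g = Graph()
for cls in ("Tourist", "Destination", "Visit"):
    g.add((TT[cls], RDF.type, OWL.Class))
g.add((TT.visits, RDF.type, OWL.ObjectProperty))
g.add((TT.visits, RDFS.domain, TT.Tourist))
g.add((TT.visits, RDFS.range, TT.Destination))

alice = URIRef("http://example.org/tourist/alice")  # example individual
g.add((alice, RDF.type, TT.Tourist))
g.add((alice, TT.visits, TT.Cartagena))
print(g.serialize(format="turtle"))
```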
  • Publication
    Application of machine learning techniques to the flexible assessment and improvement of requirements quality
    (Springer, 2020-04-27) Moreno Pelayo, Valentín; Génova Fuster, Gonzalo; Parra Corredor, Eugenio; Fraga Vázquez, Anabel; European Commission; Ministerio de Economía y Competitividad (España)
    It is already common to compute quantitative metrics of requirements to assess their quality. However, the risk is to build assessment methods and tools that are both arbitrary and rigid in the parameterization and combination of metrics. Specifically, we show that a linear combination of metrics is insufficient to adequately compute a global measure of quality. In this work, we propose a flexible method to assess and improve the quality of requirements that can be adapted to different contexts, projects, organizations, and quality standards, with a high degree of automation. The domain experts contribute an initial set of requirements that they have classified according to their quality, and we extract the requirements' quality metrics. We then use machine learning techniques to emulate the experts' implicit quality function. We also provide a procedure to suggest improvements to bad requirements. We compare the obtained rule-based classifiers with different machine learning algorithms, obtaining effectiveness measurements of around 85%. We also show what the generated rules look like and how to interpret them. The method is tailorable to different contexts, different styles of writing requirements, and different quality demands. The whole process of inferring and applying the quality rules adapted to each organization is highly automated.
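    A minimal sketch of the emulate-the-experts loop: each requirement is mapped to a few quality metrics and an interpretable classifier is fitted to expert labels, whose rules can then be read back. The metrics, corpus, and labels are invented; the paper compares several algorithms over richer metric sets.

```python
# Fit an interpretable quality classifier to expert-labelled requirements.
# Metric definitions and training data are illustrative assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

def metrics(req: str) -> list:
    words = req.split()
    vague = sum(w.lower().strip(".,") in {"fast", "easy", "adequate", "etc"}
                for w in words)
    return [len(words), vague, req.count(",")]

reqs_good = ["The system shall respond within 2 seconds to any login request."]
reqs_bad = ["The system should be fast, easy, adequate, etc."]
X = np.array([metrics(r) for r in reqs_good * 30 + reqs_bad * 30])
y = np.array([1] * 30 + [0] * 30)  # expert labels: 1 = good quality

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(clf, feature_names=["length", "vague_terms", "commas"]))
```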