{"id":504139,"date":"2021-12-02T09:00:00","date_gmt":"2021-12-02T09:00:00","guid":{"rendered":"https:\/\/www.capgemini.com\/?post_type=research-and-insight&p=641512"},"modified":"2025-03-27T07:02:55","modified_gmt":"2025-03-27T07:02:55","slug":"article-by-university-of-oxford","status":"publish","type":"research-and-insight","link":"https:\/\/www.capgemini.com\/pt-en\/insights\/research-library\/article-by-university-of-oxford\/","title":{"rendered":"Article by Marta Kwiatkowska, Professor of Computing Systems, University of Oxford"},"content":{"rendered":"\n
\"Marta-Kwiatkowska\"<\/picture><\/div>
<\/div>
Intelligent industry<\/span><\/div><\/div>
\"capgemini-research-institute\"\/<\/div><\/div><\/div>

Article by University of Oxford<\/h1><\/div>

Building safer AI for the next era of transformation<\/h2>
by Marta Kwiatkowska, Professor of Computing Systems, University of Oxford


Artificial intelligence (AI) plays a key role in modern society. It drives cars, recognizes images, understands natural language, and controls complex industrial machines. Compared with traditional human-controlled operations, AI tends to be more consistent. In the near future, AI applications will take on greater autonomy in military, engineering, and industrial settings. However, these decision-making systems have critical exploitable flaws which, if not addressed, will inevitably lead to loss of economic benefits, loss of human life and, ultimately, loss of trust in the technology.

The underlying method for building these AI systems is the deep neural network (DNN). Loosely based on the neural networks of the human brain, DNNs are vast and complex, yet every step of their computation is mathematically well defined and open to inspection. However, while mathematically transparent, logically they are “black boxes”: they work, but we don’t know how. If operators fail to remain vigilant, this gap in our understanding can expose AI to adversarial exploitation.


Breaking an AI system

Research has shown that very simple changes can drastically alter an AI model’s outputs, with potentially catastrophic consequences. Adversarial techniques[1] can fool the AI into misclassifying its input, even when the perturbation is minor. The Nexar Deep Learning Traffic Light Challenge, for example, maintains a database of 18,000 dashboard-camera images, to which the public has access and can contribute, for building AI models that identify traffic lights. The challenge is for researchers to build technology that labels each image as “red,” “green,” or “null” (meaning no light has been detected). However, it takes only one inconsistent pixel to misguide the model into misclassifying the image in question, meaning that a red light can be recorded as green, or vice versa. Moreover, these false classifications are often made with a high degree of confidence (sometimes as much as 95%).

\"\"<\/figure>\n\n\n\n
<\/div>\n\n\n\n

Source: Nexar Deep Learning Traffic Light Challenge. (a) Red light classified as green with 68% confidence after one pixel is changed. (b) Red light classified as green with 95% confidence after one pixel is changed. (c) Red light classified as green with 78% confidence after one pixel is changed.<\/p>\n\n\n\n
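To make the attack concrete, below is a minimal sketch of a random-search, single-pixel attack. It is illustrative only: `model`, `image`, and `true_label` are assumed stand-ins (any classifier mapping an HxWx3 float image to class probabilities will do), not the Nexar challenge code or a real trained network.

```python
import numpy as np

def one_pixel_attack(model, image, true_label, trials=1000, seed=None):
    """Randomly try single-pixel changes until the predicted class flips.

    `model` is a hypothetical stand-in: any callable mapping an HxWx3
    float image (values in [0, 1]) to a 1-D array of class probabilities.
    Returns (perturbed_image, new_label, confidence) or None.
    """
    rng = np.random.default_rng(seed)
    h, w, _ = image.shape
    for _ in range(trials):
        candidate = image.copy()
        y, x = rng.integers(h), rng.integers(w)
        candidate[y, x] = rng.random(3)          # overwrite one pixel
        probs = model(candidate)
        pred = int(np.argmax(probs))
        if pred != true_label:                   # misclassification found
            return candidate, pred, float(probs[pred])
    return None                                  # attack failed within budget
```

Even this crude search often succeeds on brittle models, which is what makes the single-pixel results above so alarming: no knowledge of the network's internals is required.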

These flaws can significantly change outcomes in computer-vision applications: a single stray pixel can overwhelm even a state-of-the-art vehicle-mounted AI system. Moreover, these adversarial examples are transferable, in the sense that an example misclassified by one network is also misclassified by a network with a different architecture, even one trained on different data.
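Transferability can be checked directly by replaying an example crafted against one model on a second, independently trained one. A hedged sketch, reusing the hypothetical `one_pixel_attack` above; `model_a` and `model_b` are again assumed stand-ins:

```python
# model_a and model_b are hypothetical classifiers with different
# architectures / training data but the same call signature.
result = one_pixel_attack(model_a, image, true_label)
if result is not None:
    adversarial, label_a, conf_a = result
    label_b = int(np.argmax(model_b(adversarial)))
    # The attack "transfers" if model_b is fooled by the same image.
    print(f"model_a fooled: label {label_a} at {conf_a:.0%}; "
          f"transfers to model_b: {label_b != true_label}")
```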


Implications of adversarial outcomes resonate across sectors

These simple input manipulations can cause autonomous cars to deviate sharply from expected behavior: driving into barriers, jumping signals, or leaving the road. While my group’s research has demonstrated this for cars, the same weakness applies to virtually any image-identification use case, from optical character recognition (OCR) and handwriting interpretation to natural language processing (NLP) systems.

A few other application areas in which adversarial outcomes can arise are listed below:

1. Natural language processing:

Today, natural language processing (NLP) software is regularly used to interpret legal documents and contracts.[2] These documents could be deliberately crafted to yield a flawed interpretation or to impede processing. The same threat applies to language translation, speech-to-text applications, and document processing.

2. Computer vision:

Our research shows that the modification of just a few pixels can completely alter the AI object-identification process, meaning a traffic light can be perceived as a completely different object. Applications range from remote sensing to radar systems and industrial quality control. With computer vision being the most successful and most critical AI application area, flaws exploited here can lead to suboptimal outcomes, economic loss and, in a worst-case scenario, even the loss of human life.

3. Decision-making processes:

Most decision-making systems utilize an array of inputs from sensor-based or monitoring systems, and more complex decisions usually rest on prior knowledge: if different sensors give conflicting results, the critical decision is made on the basis of prior probabilities over the possible outcomes (a minimal sketch follows below). This means that digital applications in areas such as finance and trading, cybersecurity, and healthcare can be compromised simply by intercepting a critical input to the network.
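To illustrate this prior-based arbitration, here is a minimal sketch assuming two hypothetical sensors that disagree about a hazard; every probability is invented for the example.

```python
import numpy as np

# States: [hazard, no hazard]. All numbers are invented for illustration.
prior = np.array([0.30, 0.70])

# Likelihood of each sensor's actual reading under each state.
lik_a = np.array([0.90, 0.20])   # sensor A reported "hazard"
lik_b = np.array([0.30, 0.85])   # sensor B reported "no hazard"

# Bayes' rule: the prior arbitrates between the conflicting sensors.
posterior = prior * lik_a * lik_b
posterior /= posterior.sum()
print(posterior)                 # ~[0.40, 0.60]: "no hazard" wins narrowly
```

The fragility is visible in the arithmetic: an attacker who can corrupt just one sensor stream (one likelihood term) shifts the posterior and, with it, the decision.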


Building safer systems

Building safer AI systems is the most critical challenge we face today. ÎÚÑ»´«Ã½ Research Institute’s research into ethics in AI shows that, owing to decisions reached by their AI systems, 60% of organizations have attracted legal scrutiny and 22% have faced a customer backlash in the last two to three years.[3] For safety-critical systems, the consequences will be a far more drastic erosion of trust.

While a considerable research effort has gone into building more explainable, transparent, and robust AI systems, organizations and regulators can also take initial steps to mitigate these challenges:

1. Foster awareness and understanding of possible adversarial exploitation

AI developers and teams usually focus single-mindedly on improving confidence rates and overall outcomes. This was the right direction to take when AI was in its infancy, as it helped establish AI as a tool that could be consistently useful to industry. However, with AI now actively deployed in safety-critical systems, developers and teams need to understand the shortcomings of this traditional approach to building models, architectures, and autonomous decision-making systems. A more robust, safety-first approach is required.

2. Develop tool chains to reduce exploitable flaws

It is well known that testing can detect software flaws but cannot prove their absence. A widely adopted method that can prove the correctness of software systems is model checking, an automated verification technology that establishes whether given requirements are met, used for a variety of real-time embedded and safety-critical systems. Model-checking techniques are deployed today by organizations such as Microsoft, Intel, and Facebook to check the correctness of their software. Model-checking methods for neural networks, however, are still poorly understood; their development has been hampered by a limited understanding of the theoretical fundamentals of neural networks, alongside the networks’ technical complexity. We at Oxford are actively developing software tools to verify the safety of AI systems, including diagnostic testing for the robustness issues affecting computer-vision applications (a flavor of such a robustness check is sketched below).[4]
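As a flavor of formal robustness verification, here is a minimal sketch of interval bound propagation (IBP), one simple technique for certifying local robustness of a small ReLU network. It is a generic illustration under stated assumptions, not the Oxford tool chain cited above; `layers` is a hypothetical list of (weight, bias) pairs.

```python
import numpy as np

def ibp_bounds(layers, lo, hi):
    """Propagate elementwise input bounds [lo, hi] through a ReLU
    network given as a list of (weight, bias) pairs."""
    for i, (W, b) in enumerate(layers):
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        new_lo = W_pos @ lo + W_neg @ hi + b     # worst-case low output
        new_hi = W_pos @ hi + W_neg @ lo + b     # worst-case high output
        if i < len(layers) - 1:                  # ReLU on hidden layers
            new_lo, new_hi = np.maximum(new_lo, 0.0), np.maximum(new_hi, 0.0)
        lo, hi = new_lo, new_hi
    return lo, hi

def certify(layers, x, eps, label):
    """True if every input within L-infinity distance eps of x is
    still assigned `label` (a proof, not a test)."""
    lo, hi = ibp_bounds(layers, x - eps, x + eps)
    return lo[label] > np.delete(hi, label).max()
```

If `certify` returns True, no perturbation within the ball, however adversarially chosen, can flip the label; unlike testing, a positive answer is a guarantee, which is exactly the property model checking brings.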

3. Regulators need to build safety guidelines and testing frameworks for safety-critical AI systems

Regulators also need to put emphasis on developing robustness criteria for safety-critical AI systems, together with frameworks for checking that such criteria are met. Standardized testing and evaluation frameworks should be created to support the development of safety-critical autonomous systems, extending the existing safety regulations for cars, medical devices, and the workplace.

4. Develop collaborative research into AI systems, their transparency, and their ethical status

The field of adversarial exploitation and model checking for neural networks is still in its infancy, and we have a long way to go to establish a complete understanding of it. Industry-wide collaboration is required to guide the development of appropriate frameworks and standards and to develop new ways of working: building open-source tool chains and evaluation methodologies, and governing practices among AI developers and teams.

Adversarial AI is still in its infancy in terms of industry understanding. To date, there has been no (detected) concerted effort to exploit these loopholes, but it is only a matter of time before hostile players attempt to do so. Beyond hostile actors, these AI systems also show potential flaws relating to sensitivity to naturally occurring “noise” in the environment. The reliability, robustness, and potential economic value of AI are directly linked to the trust we place in these systems. A significant effort to address these challenges is required to ensure we fulfil the social and economic potential of AI.


[1] Adversarial examples are inputs to machine-learning models designed to cause the model to make a mistake.

[2] Thomson Reuters Blogs, “Legal AI: A beginner’s guide,” February 2017.

[3] ÎÚÑ»´«Ã½ Research Institute, “AI and the ethical conundrum,” September 2020.

[4] Matthew Wicker, Xiaowei Huang, Marta Kwiatkowska, “Feature-Guided Black-Box Safety Testing of Deep Neural Networks,” in Proc. 24th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS’18), arXiv.

                Artificial intelligence (AI) plays a key role in modern society. It drives cars, detects images, understands natural language, and controls<\/p>\n","protected":false},"author":33,"featured_media":512320,"template":"","meta":{"cg_dt_proposed_to":[],"cg_seo_hreflang_relations":"[]","cg_seo_canonical_relation":"","cg_seo_hreflang_x_default_relation":"","cg_dt_approved_content":true,"cg_dt_mandatory_content":false,"cg_dt_notes":"","cg_dg_source_changed":false,"cg_dt_link_disabled":false,"footnotes":"","related_resource_url":"","related_resource_id":0,"related_resource_size":"","related_resource_type":"","cg_author":0,"_yoast_wpseo_primary_theme":75,"primary_term":"Intelligent industry","featured_focal_points":""},"tags":[],"research-and-insight-type":[205],"theme":[75],"brand":[302],"service":[38],"industry":[],"partners":[],"content-group":[],"class_list":["post-504139","research-and-insight","type-research-and-insight","status-publish","has-post-thumbnail","hentry","research-and-insight-type-conversations-for-tomorrow","theme-intelligent-industry","brand-capgemini-research-institute","service-intelligent-industry"],"yoast_head":"\nArticle by Marta Kwiatkowska, Professor of Computing Systems, University of Oxford - ÎÚÑ»´«Ã½ Portugal<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.capgemini.com\/pt-en\/insights\/research-library\/article-by-university-of-oxford\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Article by Marta Kwiatkowska, Professor of Computing Systems, University of Oxford - ÎÚÑ»´«Ã½ Portugal\" \/>\n<meta property=\"og:description\" content=\"Artificial intelligence (AI) plays a key role in modern society. It drives cars, detects images, understands natural language, and controls\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.capgemini.com\/pt-en\/insights\/research-library\/article-by-university-of-oxford\/\" \/>\n<meta property=\"og:site_name\" content=\"ÎÚÑ»´«Ã½ Portugal\" \/>\n<meta property=\"article:modified_time\" content=\"2025-03-27T07:02:55+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.capgemini.com\/pt-en\/wp-content\/uploads\/sites\/42\/2021\/12\/ÎÚÑ»´«Ã½_Conversation-for-tomorrow_Issue3_Marta-Kwiatkowska_1.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"2880\" \/>\n\t<meta property=\"og:image:height\" content=\"1800\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data1\" content=\"7 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.capgemini.com\/pt-en\/insights\/research-library\/article-by-university-of-oxford\/\",\"url\":\"https:\/\/www.capgemini.com\/pt-en\/insights\/research-library\/article-by-university-of-oxford\/\",\"name\":\"Article by Marta Kwiatkowska, Professor of Computing Systems, University of Oxford - ÎÚÑ»´«Ã½ Portugal\",\"isPartOf\":{\"@id\":\"https:\/\/www.capgemini.com\/pt-en\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.capgemini.com\/pt-en\/insights\/research-library\/article-by-university-of-oxford\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/www.capgemini.com\/pt-en\/insights\/research-library\/article-by-university-of-oxford\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.capgemini.com\/pt-en\/wp-content\/uploads\/sites\/42\/2021\/12\/ÎÚÑ»´«Ã½_Conversation-for-tomorrow_Issue3_Marta-Kwiatkowska_1.jpg\",\"datePublished\":\"2021-12-02T09:00:00+00:00\",\"dateModified\":\"2025-03-27T07:02:55+00:00\",\"breadcrumb\":{\"@id\":\"https:\/\/www.capgemini.com\/pt-en\/insights\/research-library\/article-by-university-of-oxford\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.capgemini.com\/pt-en\/insights\/research-library\/article-by-university-of-oxford\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.capgemini.com\/pt-en\/insights\/research-library\/article-by-university-of-oxford\/#primaryimage\",\"url\":\"https:\/\/www.capgemini.com\/pt-en\/wp-content\/uploads\/sites\/42\/2021\/12\/ÎÚÑ»´«Ã½_Conversation-for-tomorrow_Issue3_Marta-Kwiatkowska_1.jpg\",\"contentUrl\":\"https:\/\/www.capgemini.com\/pt-en\/wp-content\/uploads\/sites\/42\/2021\/12\/ÎÚÑ»´«Ã½_Conversation-for-tomorrow_Issue3_Marta-Kwiatkowska_1.jpg\",\"width\":2880,\"height\":1800},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.capgemini.com\/pt-en\/insights\/research-library\/article-by-university-of-oxford\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Research & insights\",\"item\":\"https:\/\/www.capgemini.com\/pt-en\/research-and-insight\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Article by Marta Kwiatkowska, Professor of Computing Systems, University of Oxford\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.capgemini.com\/pt-en\/#website\",\"url\":\"https:\/\/www.capgemini.com\/pt-en\/\",\"name\":\"ÎÚÑ»´«Ã½ Portugal\",\"description\":\"ÎÚÑ»´«Ã½\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.capgemini.com\/pt-en\/?s={search_term_string}\"},\"query-input\":\"required name=search_term_string\"}],\"inLanguage\":\"en-US\"}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. 
-->","yoast_head_json":{"title":"Article by Marta Kwiatkowska, Professor of Computing Systems, University of Oxford - ÎÚÑ»´«Ã½ Portugal","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.capgemini.com\/pt-en\/insights\/research-library\/article-by-university-of-oxford\/","og_locale":"en_US","og_type":"article","og_title":"Article by Marta Kwiatkowska, Professor of Computing Systems, University of Oxford - ÎÚÑ»´«Ã½ Portugal","og_description":"Artificial intelligence (AI) plays a key role in modern society. It drives cars, detects images, understands natural language, and controls","og_url":"https:\/\/www.capgemini.com\/pt-en\/insights\/research-library\/article-by-university-of-oxford\/","og_site_name":"ÎÚÑ»´«Ã½ Portugal","article_modified_time":"2025-03-27T07:02:55+00:00","og_image":[{"width":2880,"height":1800,"url":"https:\/\/www.capgemini.com\/pt-en\/wp-content\/uploads\/sites\/42\/2021\/12\/ÎÚÑ»´«Ã½_Conversation-for-tomorrow_Issue3_Marta-Kwiatkowska_1.jpg","type":"image\/jpeg"}],"twitter_card":"summary_large_image","twitter_misc":{"Est. reading time":"7 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/www.capgemini.com\/pt-en\/insights\/research-library\/article-by-university-of-oxford\/","url":"https:\/\/www.capgemini.com\/pt-en\/insights\/research-library\/article-by-university-of-oxford\/","name":"Article by Marta Kwiatkowska, Professor of Computing Systems, University of Oxford - ÎÚÑ»´«Ã½ Portugal","isPartOf":{"@id":"https:\/\/www.capgemini.com\/pt-en\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.capgemini.com\/pt-en\/insights\/research-library\/article-by-university-of-oxford\/#primaryimage"},"image":{"@id":"https:\/\/www.capgemini.com\/pt-en\/insights\/research-library\/article-by-university-of-oxford\/#primaryimage"},"thumbnailUrl":"https:\/\/www.capgemini.com\/pt-en\/wp-content\/uploads\/sites\/42\/2021\/12\/ÎÚÑ»´«Ã½_Conversation-for-tomorrow_Issue3_Marta-Kwiatkowska_1.jpg","datePublished":"2021-12-02T09:00:00+00:00","dateModified":"2025-03-27T07:02:55+00:00","breadcrumb":{"@id":"https:\/\/www.capgemini.com\/pt-en\/insights\/research-library\/article-by-university-of-oxford\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.capgemini.com\/pt-en\/insights\/research-library\/article-by-university-of-oxford\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.capgemini.com\/pt-en\/insights\/research-library\/article-by-university-of-oxford\/#primaryimage","url":"https:\/\/www.capgemini.com\/pt-en\/wp-content\/uploads\/sites\/42\/2021\/12\/ÎÚÑ»´«Ã½_Conversation-for-tomorrow_Issue3_Marta-Kwiatkowska_1.jpg","contentUrl":"https:\/\/www.capgemini.com\/pt-en\/wp-content\/uploads\/sites\/42\/2021\/12\/ÎÚÑ»´«Ã½_Conversation-for-tomorrow_Issue3_Marta-Kwiatkowska_1.jpg","width":2880,"height":1800},{"@type":"BreadcrumbList","@id":"https:\/\/www.capgemini.com\/pt-en\/insights\/research-library\/article-by-university-of-oxford\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Research & insights","item":"https:\/\/www.capgemini.com\/pt-en\/research-and-insight\/"},{"@type":"ListItem","position":2,"name":"Article by Marta Kwiatkowska, Professor of Computing Systems, University of 
Oxford"}]},{"@type":"WebSite","@id":"https:\/\/www.capgemini.com\/pt-en\/#website","url":"https:\/\/www.capgemini.com\/pt-en\/","name":"ÎÚÑ»´«Ã½ Portugal","description":"ÎÚÑ»´«Ã½","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.capgemini.com\/pt-en\/?s={search_term_string}"},"query-input":"required name=search_term_string"}],"inLanguage":"en-US"}]}},"theme_term_info":[{"id":75,"name":"Intelligent industry"}],"industry_term_info":[],"services_term_info":[{"id":38,"name":"Intelligent Industry"}],"partners_term_info":[],"brand_term_info":[{"id":302,"name":"ÎÚÑ»´«Ã½ Research Institute","slug":"capgemini-research-institute"}],"brand_term":[{"id":302,"slug":"capgemini-research-institute"}],"parsely":{"version":"1.1.0","canonical_url":"https:\/\/capgemini.com\/pt-en\/insights\/research-library\/article-by-university-of-oxford\/","smart_links":{"inbound":0,"outbound":0},"traffic_boost_suggestions_count":0,"meta":{"@context":"https:\/\/schema.org","@type":"NewsArticle","headline":"Article by Marta Kwiatkowska, Professor of Computing Systems, University of Oxford","url":"https:\/\/www.capgemini.com\/pt-en\/insights\/research-library\/article-by-university-of-oxford\/","mainEntityOfPage":{"@type":"WebPage","@id":"https:\/\/www.capgemini.com\/pt-en\/insights\/research-library\/article-by-university-of-oxford\/"},"thumbnailUrl":"https:\/\/www.capgemini.com\/pt-en\/wp-content\/uploads\/sites\/42\/2021\/12\/ÎÚÑ»´«Ã½_Conversation-for-tomorrow_Issue3_Marta-Kwiatkowska_1.jpg?w=150&h=150&crop=1","image":{"@type":"ImageObject","url":"https:\/\/www.capgemini.com\/pt-en\/wp-content\/uploads\/sites\/42\/2021\/12\/ÎÚÑ»´«Ã½_Conversation-for-tomorrow_Issue3_Marta-Kwiatkowska_1.jpg"},"articleSection":"Uncategorized","author":[{"@type":"Person","name":"rajeshrangdal"}],"creator":["rajeshrangdal"],"publisher":{"@type":"Organization","name":"ÎÚÑ»´«Ã½ Portugal","logo":""},"keywords":[],"dateCreated":"2021-12-02T09:00:00Z","datePublished":"2021-12-02T09:00:00Z","dateModified":"2025-03-27T07:02:55Z"},"rendered":"<meta name=\"parsely-title\" content=\"Article by Marta Kwiatkowska, Professor of Computing Systems, University of Oxford\" \/>\n<meta name=\"parsely-link\" content=\"https:\/\/www.capgemini.com\/pt-en\/insights\/research-library\/article-by-university-of-oxford\/\" \/>\n<meta name=\"parsely-type\" content=\"post\" \/>\n<meta name=\"parsely-image-url\" content=\"https:\/\/www.capgemini.com\/pt-en\/wp-content\/uploads\/sites\/42\/2021\/12\/ÎÚÑ»´«Ã½_Conversation-for-tomorrow_Issue3_Marta-Kwiatkowska_1.jpg?w=150&h=150&crop=1\" \/>\n<meta name=\"parsely-pub-date\" content=\"2021-12-02T09:00:00Z\" \/>\n<meta name=\"parsely-section\" content=\"Uncategorized\" \/>\n<meta name=\"parsely-author\" content=\"rajeshrangdal\" \/>","tracker_url":"https:\/\/cdn.parsely.com\/keys\/capgemini.com\/p.js"},"featured_image_src":"https:\/\/www.capgemini.com\/pt-en\/wp-content\/uploads\/sites\/42\/2021\/12\/ÎÚÑ»´«Ã½_Conversation-for-tomorrow_Issue3_Marta-Kwiatkowska_1.jpg","featured_image_alt":"","jetpack_sharing_enabled":true,"distributor_meta":false,"distributor_terms":false,"distributor_media":false,"distributor_original_site_name":"ÎÚÑ»´«Ã½ 
Portugal","distributor_original_site_url":"https:\/\/www.capgemini.com\/pt-en","push-errors":false,"tag_names":[],"featured_image_url":"https:\/\/www.capgemini.com\/pt-en\/wp-content\/uploads\/sites\/42\/2021\/12\/ÎÚÑ»´«Ã½_Conversation-for-tomorrow_Issue3_Marta-Kwiatkowska_1.jpg","_links":{"self":[{"href":"https:\/\/www.capgemini.com\/pt-en\/wp-json\/wp\/v2\/research-and-insight\/504139","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.capgemini.com\/pt-en\/wp-json\/wp\/v2\/research-and-insight"}],"about":[{"href":"https:\/\/www.capgemini.com\/pt-en\/wp-json\/wp\/v2\/types\/research-and-insight"}],"author":[{"embeddable":true,"href":"https:\/\/www.capgemini.com\/pt-en\/wp-json\/wp\/v2\/users\/33"}],"version-history":[{"count":5,"href":"https:\/\/www.capgemini.com\/pt-en\/wp-json\/wp\/v2\/research-and-insight\/504139\/revisions"}],"predecessor-version":[{"id":530090,"href":"https:\/\/www.capgemini.com\/pt-en\/wp-json\/wp\/v2\/research-and-insight\/504139\/revisions\/530090"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.capgemini.com\/pt-en\/wp-json\/wp\/v2\/media\/512320"}],"wp:attachment":[{"href":"https:\/\/www.capgemini.com\/pt-en\/wp-json\/wp\/v2\/media?parent=504139"}],"wp:term":[{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.capgemini.com\/pt-en\/wp-json\/wp\/v2\/tags?post=504139"},{"taxonomy":"research-and-insight-type","embeddable":true,"href":"https:\/\/www.capgemini.com\/pt-en\/wp-json\/wp\/v2\/research-and-insight-type?post=504139"},{"taxonomy":"theme","embeddable":true,"href":"https:\/\/www.capgemini.com\/pt-en\/wp-json\/wp\/v2\/theme?post=504139"},{"taxonomy":"brand","embeddable":true,"href":"https:\/\/www.capgemini.com\/pt-en\/wp-json\/wp\/v2\/brand?post=504139"},{"taxonomy":"service","embeddable":true,"href":"https:\/\/www.capgemini.com\/pt-en\/wp-json\/wp\/v2\/service?post=504139"},{"taxonomy":"industry","embeddable":true,"href":"https:\/\/www.capgemini.com\/pt-en\/wp-json\/wp\/v2\/industry?post=504139"},{"taxonomy":"partners","embeddable":true,"href":"https:\/\/www.capgemini.com\/pt-en\/wp-json\/wp\/v2\/partners?post=504139"},{"taxonomy":"content-group","embeddable":true,"href":"https:\/\/www.capgemini.com\/pt-en\/wp-json\/wp\/v2\/content-group?post=504139"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}