<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta http-equiv="X-UA-Compatible" content="ie=edge">
<title> REV website </title>
<link rel="stylesheet" type="text/css" href="reset.css" />
<link rel="stylesheet" type="text/css" href="main.css" />
<link href="https://fonts.googleapis.com/css?family=Source+Sans+Pro:300,400,600,700&display=swap&subset=latin-ext" rel="stylesheet">
</head>
<body>
<!------------------------------| NAVIGATION |---------------------------------------->
<nav>
<a href="index.html"><img class="AXA_logo nav_item" src="img/axa-logo.png" alt="AXA logo" style="display: none"></a>
<ul>
<li class="nav_item"><a href="index.html">RESEARCH</a></li>
<li class="nav_item"><a href="team.html">OUR TEAM</a></li>
</ul>
</nav>
<!------------------------------| HEADER |---------------------------------------->
<header class="intro">
<h1><object style="height: 70%; width: 70%" data="img/logo-rev-research.svg" type="image/svg+xml"></object></h1>
<div>
The <b>AXA/GO/REV/Research</b> team carries out research on the relationship between humans and Machine Learning (ML) and Artificial Intelligence (AI), to ensure that society benefits from them.
</div>
<a href="#first_subject"><object style="height: 50rem; width: 50rem" data="img/arrow.svg" type="image/svg+xml"></object>
</a>
</header>
<!------------------------------| TEAM TOPICS |---------------------------------------->
<section class="paper_list">
<!--Subject-->
<div id="first_subject" class="show">
<img src="img/subjects/behavioral_design.jpg" alt="Human+AI interactions">
<div>
<h2>Human+AI interactions</h2>
<p>Artificial intelligence (AI) has become pervasive, blending into our lives and habits without our noticing. Although AI is very powerful, companies do not know what its impact on society will be, and people do not know how to interact with it. The purpose of this research is to understand the user experience principles of AI-infused products and to define responsible AI guidelines for all practitioners.</p>
<a href="human_ai_interaction.html">See the publications & meet the team</a>
</div>
</div>
<!--Subject-->
<div class="show">
<img src="img/subjects/fairness.jpg" alt="Fairness">
<div>
<h2>Fairness</h2>
<p>Machine learning models identify patterns in data. Their major strength is the ability to find and discriminate classes in training data, and to use those insights to make predictions on new, unseen data sets. However, there is growing concern about their potential to reproduce discrimination against particular groups of people based on sensitive characteristics such as gender, race, religion, or other attributes. In particular, algorithms trained on biased data are prone to learn, perpetuate, or even reinforce these biases. The purpose of our research is to measure unwanted bias and to find solutions to mitigate it.</p>
<a href="fairness.html">See the publications & meet the team</a>
</div>
</div>
<!--Subject-->
<div class="show" id="research">
<img src="img/subjects/interpretability.jpg" alt="Interpretability">
<div>
<h2>Interpretability</h2>
<p>The widespread use of Machine Learning and Artificial Intelligence in our society requires a high level of transparency, to ensure that practitioners and users alike are aware of how and why systems behave the way they do. The team leads a research effort to improve Machine Learning interpretability, with the aims of (I) avoiding black-box predictions and (II) improving the quality of Machine Learning models.</p>
<a href="interpretability.html">See the publications & meet the team</a>
</div>
</div>
<!--Subject-->
<div class="show">
<img src="img/subjects/mobility.jpg" alt="Computer Vision">
<div>
<h2>Computer Vision</h2>
<p>State-of-the-art Computer Vision will improve our internal processes, allowing faster and more accurate action. Be it document understanding for claims processing, flood detection for real-time customer support, or object detection for improved risk understanding, these algorithms are transforming the way we work.</p>
<a href="comp-vis.html">See the publications & meet the team</a>
</div>
</div>
<!--Subject-->
<div class="show">
<img src="img/subjects/quantum.jpg" alt="Quantum">
<div>
<h2>Quantum</h2>
<p>Quantum Computing (QC) is a nascent and much-hyped technology whose complexity makes some aspects confusing. There are many different kinds of technologies behind QC, and even more possible applications; behind each of these are countless organisations trying to label their QC technology as the best. We collect all available information and sort through it to find which parts are viable and scalable, so that an applicable QC framework can be built for future use.</p>
<a href="quantum.html">See the publications & meet the team</a>
</div>
</div>
<!--Subject-->
<div class="show">
<img src="img/subjects/regulation.jpg" alt="Regulation">
<div>
<h2>Regulation</h2>
<p>Today, the question of how the development of technology — and AI in particular — should be governed remains at the forefront of public debate. International organizations, national and local governments, and a growing number of firms already designing and using artificial intelligence (AI) recognize the need for guiding principles and a model of governance. The team analyzes the different principles shaping and guiding the ethical development of AI uses.</p>
<a href="regulation.html">See the publications & meet the team</a>
</div>
</div>
<!--Subject-->
<div class="show">
<img src="img/subjects/mobility.jpg" alt="Mobility, safety, and IoT">
<div>
<h2>Mobility/Safety/IoT</h2>
<p>Urban mobility already has a considerable impact on our quality of life and safety. At the same time, novel approaches such as urban sensing, multi-modal and intelligent transport systems, and smart cities are offering more sustainable mobility and new services.
Our team emphasizes that the right mix of machine learning, IoT, graph theory, and networking will lead to better safety and mobility orchestration for public transportation, private drivers, autonomous and connected vehicles, and pedestrians.</p>
<a href="mobility.html">See the publications & meet the team</a>
</div>
</div>
<!--Subject-->
<div class="show">
<img src="img/subjects/robustness.jpg" alt="Robustness">
<div>
<h2>Robustness</h2>
<p>Machine learning (ML) algorithms are now used to tackle important problems such as medical diagnosis. However, they are still not robust to modified inputs such as adversarial examples, and they are often overconfident about their own performance. This is an important issue, as the confidence associated with a prediction is often used to decide whether the prediction can be trusted. Our research therefore aims at improving ML robustness and at finding ways to better assess the confidence provided, in order to reliably detect misclassifications.</p>
<a href="robustness.html">See the publications & meet the team</a>
</div>
</div>
</section>
<!------------------------------| FOOTER |---------------------------------------->
<footer>
<div>
<a href="#">
<h3>Follow Us</h3>
</a>
</div>
<div>
<h3>About Us</h3>
<p>We are a global research team working for AXA. We strongly believe that research will make the world a better place for tomorrow.</p>
<img src="img/paris-illustration.png" alt="Illustration of Paris">
</div>
</footer>
</body></html>