arXiv:2204.07666v1 [cs.HC] 15 Apr 2022
Designing Creative AI Partners with COFI: A Framework for Modeling Interaction in Human-AI Co-Creative Systems
JEBA REZWANA and MARY LOU MAHER, University of North Carolina at Charlotte, USA
Human-AI co-creativity involves humans and AI collaborating on a shared creative product as partners. In a creative collaboration, interaction dynamics, such as turn-taking, contribution type, and communication, are the driving forces of the co-creative process. The interaction model is therefore a critical and essential component of effective co-creative systems. There is relatively little research about interaction design in the co-creativity field, which is reflected in a lack of focus on interaction design in many existing co-creative systems; the primary focus of co-creativity research has been on the abilities of the AI. This paper focuses on the importance of interaction design in co-creative systems through the development of the Co-Creative Framework for Interaction Design (COFI), which describes the broad scope of possibilities for interaction design in co-creative systems. Researchers can use COFI to model interaction in co-creative systems by exploring alternatives in this design space of interaction. COFI can also be beneficial when investigating and interpreting the interaction design of existing co-creative systems. We coded a dataset of 92 existing co-creative systems using COFI and analyzed the data to show how COFI provides a basis for categorizing the interaction models of existing co-creative systems. We identify opportunities to shift the focus of interaction models in co-creativity to enable more communication between the user and the AI, leading to human-AI partnerships.
CCS Concepts: • Human-centered computing → Interaction design process and methods; Collaborative interaction.
Additional Key Words and Phrases: Human-AI Co-Creativity, Co-Creativity, Interaction Design, Framework
ACM Reference Format: Jeba Rezwana and Mary Lou Maher. 2022. Designing Creative AI Partners with COFI: A Framework for Modeling Interaction in Human-AI Co-Creative Systems. ACM Trans. Comput.-Hum. Interact. 1, 1, Article 1 (January 2022), 27 pages.
1 INTRODUCTION
Computational creativity is an interdisciplinary field that applies artificial intelligence to develop computational systems capable of producing creative artifacts, ideas, and performances [33]. Research in computational creativity has led to different types of creative systems that can be categorized by purpose: systems that generate novel and valuable creative products, systems that support human creativity, and systems that collaborate with the user on a shared creative product, combining the creative abilities of both the user and the AI [39]. Davis introduced the term human-computer co-creativity, in which humans and computers collaborate in a creative process as colleagues [42]. In human-computer co-creativity, the humans and AI agents are viewed as one system through which creativity emerges. The creativity that emerges from a collaboration differs from the creativity emerging from an individual: creative collaboration involves interaction among collaborators, and the shared creative product is more creative than each individual could achieve alone [137]. Sonnenburg demonstrated that communication is the driving force of collaborative creativity [145]. Interaction is a basic and essential component of co-creative systems, as both the human and the AI actively participate and interact in the co-creation, unlike autonomous creative systems that generate creative artifacts alone and creativity support tools that support human creativity.
Authors’ address: Jeba Rezwana, jrezwana@uncc.edu; Mary Lou Maher, m.maher@uncc.edu, University of North Carolina at Charlotte, USA.
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
© 2022 Association for Computing Machinery.
Designing and evaluating co-creative systems poses many challenges due to the open-ended and improvisational nature of the interaction between the human and the AI agent [41, 78]. Humans utilize many different creative strategies and reasoning processes throughout the creative process, and ideas and the creative product develop dynamically through time. This continual progression of ideas requires adaptability on the agent's part. Additionally, it is not always clear how the co-creative AI should contribute and interact during the course of the co-creative process. For example, sometimes the human may want to lead and have the AI assist with some tasks, whereas at other times the human may want the AI to lead, to help find inspiration, or to work independently. Understanding the mechanics of co-creation remains very much an open question in the young field of human-computer co-creativity. Bown asserted that the success of a creative system's collaborative role should be further investigated, as interaction plays a key role in the creative process of co-creative systems [17]. AI ability alone does not ensure a positive collaborative experience for users [100], and interaction is more critical than algorithms in systems where interaction with users is essential [152]. In this paper we focus on the interaction design space as an essential aspect of effective co-creative systems.
Interaction design is the creation of a dialogue between users and the system [87]. Recently, interaction design in co-creative systems has been addressed as a significant aspect of computational creativity. Kantosalo et al. argued that interaction design, specifically interaction modality, should be the ground zero for designing co-creative systems [77]. However, the interaction designs of many existing co-creative systems provide only one-way interaction: humans can interact with the AI, but the system is not designed for the AI to communicate back to humans. For example, Collabdraw [54] is a co-creative sketching environment where users draw with an AI. The user interface includes only one button, which users click to submit their artwork and indicate that their turn is complete. Other than the button, there is no way for the user to communicate with the AI, or vice versa, to provide information, suggestions, or feedback. Although the AI algorithm is capable of providing intriguing contributions to the creative process, the interaction design is inadequate for collaboration between human and AI. As another example, Image-to-Image [68] is a co-creative system that converts a user's line drawing of a particular object into a photo-realistic image. The user interface has only one button, which users click to tell the AI to convert the drawing. Interaction design can provide more than the transfer of instructions from a user to an AI agent to generate a creative artifact, and it can lead to a more engaging user experience. A recent study showed increased user satisfaction with text-based instructions from the AI rather than button-based instructions in a co-creation [118]. A starting point for investigating interaction models is the study of collaboration among humans [39]. Understanding the factors in human collaboration can build the foundation for the development of human-AI collaboration in co-creative systems [107]. Interaction models developed for computer-supported collaborative work are an important source for identifying interaction models relevant to co-creative systems.
In this paper, we present the Co-Creative Framework for Interaction Design (COFI), which describes interaction components as a space of possibilities for interaction design in co-creative systems. These interaction components represent various aspects of a co-creation, such as participation style, contribution type, and communication between humans and the AI. COFI is informed by the literature on human collaboration, CSCW, computational creativity, and human-computer co-creativity. We adopted interaction components based on a literature review and adapted the components to concepts relevant to co-creativity. COFI can be used as a guide when designing the interaction models of co-creative systems. COFI can also be beneficial for investigating and interpreting the interaction design of existing co-creative systems. We coded and analyzed the interaction models of a dataset of 92 co-creative systems using COFI to evaluate the value and analytical power of the framework. Three distinct interaction models for co-creative systems emerged from this
analysis: generative pleasing AI agents that follow along with the user, improvisational AI agents that work alongside users on a shared product spontaneously, and advisory AI agents that both generate and evaluate the creative product. The analysis reveals that the co-creative systems in this dataset lack communication channels between the user and the AI agent. Finally, this paper discusses the limitations of the existing interaction models in co-creative systems, potential areas for further development, and the importance of extending the scope of human-AI communication in co-creative systems.
2 RELATED WORK
2.1 Co-creative Systems
Creativity is defined as the exploration and production of novel and useful ideas [47, 70, 72]. Wiggins defined creative systems as "a collection of processes, natural or automatic, which are capable of achieving or simulating behavior which in humans would be deemed creative" [156]. Davis et al. discussed three main categories of creative systems based on their working processes and purposes [39]: standalone generative systems, creativity support tools, and co-creative systems. Standalone generative systems are fully autonomous intelligent systems that work independently, without any interaction with humans in the creative process. Creative systems that support the user's creativity without contributing to the creative process are considered creativity support tools (CSTs). In co-creative systems, humans and computers both contribute as creative colleagues in the creative process [42]. Co-creative systems originated from the concept of combining standalone generative systems with creativity support tools, as computers and humans both take the initiative in the creative process and interact as co-creators [76]. Mixed-initiative creative systems is often used as a substitute term for co-creative systems in the literature [161].
In a co-creative system, interaction between the human and the AI agent makes the creative process complex and emergent. Maher explores issues related to who is being creative when humans and AI collaborate in a co-creative system [106]. Liapis et al. argued that when creativity emerges from human-computer interaction, it cannot be credited to either the human or the computer alone, and it surpasses both contributors' original intentions as novel ideas arise in the process [95]. Designing interaction in co-creative systems has unique challenges due to the spontaneity of the interaction between the human and the AI [41]. A co-creative AI agent needs continual adjustment and adaptation to cope with human strategies. A good starting point for investigating how to model an effective interaction design for co-creative systems is studying creative collaboration among humans [39]. Mamykina et al. argued that understanding the factors of human collaborative creativity can provide the foundation for developing computer-based systems that augment or enhance collaborative creativity [107].
2.2 Interaction Design in Co-creative Systems
Regarding interaction design in interactive artifacts, Fallman stated: "interaction design takes a holistic view of the relationship between designed artifacts, those that are exposed to these artifacts, and the socio-cultural context in which the meeting takes place" [53]. In the field of co-creativity, interaction design includes the various parts and pieces of the interaction dynamics between the human and the AI, for example participation style, communication between collaborators, and contribution type. The question is how researchers and designers can explore the possible space of interaction designs in co-creative systems. For instance, turn-taking is the ability for agents to lead or follow in the process of interaction [158]. When designing a co-creative system, should the designer choose a turn-taking or a concurrent participation style? Turn-taking models work well in many co-creative systems but may not fit all of them. Lauren and Magerko investigated, through an empirical study, whether a turn-taking model improves the user experience with LuminAI, a co-creative dance partner [158]. However, their results showed
a negative user experience with the turn-taking model compared to a non-turn-taking model. The negative user experience resulted from users' dislike of the AI agent taking the lead.
Bown argued that the most practiced form of evaluating artificial creative systems is largely theoretical and not empirically well-grounded, and he suggested interaction design as a way to ground empirical evaluations of computational creativity [16]. Yee-King and d'Inverno also argued for a stronger focus on the user experience of creative systems, suggesting a need for further integration of interaction design practice into co-creativity research [162]. There is a lack of a holistic framework for interaction design in co-creative systems. Such a framework is necessary to explain and explore the possible interaction spaces and to compare and evaluate the interaction designs of existing co-creative systems, improving the practice of interaction modeling in co-creative systems.
There are recent developments in frameworks and strategies for interaction in co-creative systems. Kantosalo et al. proposed a framework describing three aspects of interaction in co-creative systems: interaction modalities, interaction styles, and interaction strategies [77]. They analyzed nine co-creative systems with their framework to compare different systems' creativity approaches, even when the systems are within the same creative domain [77]. Bown and Brown identified three interaction strategies in metacreation, the automation of creative tasks with machines: operation-based interaction, request-based interaction, and ambient interaction [18]. Bown et al. explored the role of dialogue between the human and the AI in co-creation and argued that both linguistic and non-linguistic dialogues of concepts and artifacts are essential to maintain the quality of co-creation [19]. Guzdial and Riedl proposed an interaction framework for turn-based co-creative AI agents to better understand the space of possible designs of co-creative systems [62]. Their framework is limited to turn-based co-creative agents and focuses on contributions and turn-taking. In this paper we present COFI, a description of a design space of possibilities for interaction in co-creative systems that includes and extends these existing frameworks and strategies.
2.3 Creative Collaboration among Humans
Sawyer asserted that the creativity that emerges from collaboration is different from the creativity emerging from an individual, with interaction among the group being a vital component of creativity [137]. He investigated the creative process emerging from a group by observing and analyzing improvisational theater performances [137] and argued that the shared product of collaborative creativity is more creative than what each individual alone could achieve. Sonnenburg introduced a theoretical model for creative collaboration that presents communication among the group as the driving force of collaborative creativity [145]. Interaction among the individuals in a collaboration makes the process emergent and complex. For investigating human collaboration, many researchers stress the importance of understanding the process of interaction. Fantasia et al. proposed an embodied approach to collaboration which considers collaboration as a property and intrinsic part of interaction processes [55]. They claimed that interaction dynamics help in understanding and fostering our knowledge of different ways of engaging with others, and they argued that it is crucial to investigate the interaction context, the environment, and how collaborators make sense of the whole process in order to gain more knowledge and understanding of collaboration. In COFI, we include components that address the interaction we observe in human-to-human collaboration as possibilities for human-AI co-creativity.
Computer-supported cooperative work (CSCW) is computer-assisted coordinated activity carried out by a group of collaborating individuals [9]. Schmidt defined CSCW as an endeavor to understand the nature and characteristics of collaborative work in order to design adequate computer-based technologies [139]. A foundation of CSCW is sense-making and understanding the nature of collaborative work for designing adequate computer-based technology to support human collaboration.
CSCW systems are designed to improve group communication while alleviating negative interactions that reduce collaboration quality [75]. To build effective CSCW systems for collaborative creative work, many CSCW researchers have investigated creative collaboration among humans to understand the mechanics of collaboration. For this reason, the design space of CSCW is relevant to interaction design in co-creative systems.
2.4 Sense-making in Collaboration
Sense-making is motivated by a continual urge to understand connections among people, places, and events in order to anticipate their trajectories [86]. Russell et al. described the processes involved in sense-making: searching for representations, instantiating representations by encoding information, and utilizing the encodings in task-specific ways [133]. Davis argued that participatory sense-making is useful for analyzing, understanding, and modeling creative collaboration [39]. De Jaegher and Di Paolo also proposed participatory sense-making as a starting point for understanding social interaction [44]. To understand participatory sense-making, the definition of sense-making from cognitive theory is crucial: sense-making is the way cognitive agents meaningfully connect with their world, based on their needs and goals as self-organizing, self-maintaining, embodied agents [43]. Introducing multiple agents into the environment makes the dynamics of sense-making more complex and emergent, as each agent interacts with the environment as well as with the other agents. Participatory sense-making evolves from this complex, mutually interactive process [39]. Participatory sense-making occurs where "a co-regulated coupling exists between at least two autonomous agents where the regulation itself is aimed at the aspects of the coupling itself so that the domain of relational dynamics constitutes an emergent autonomous organization without destroying the autonomy of the agents involved" [44]. In this quote, De Jaegher and Di Paolo outline the process of participatory sense-making, in which meaning-making of relational interaction dynamics such as the rhythm of turn-taking, manner of action, and interaction style is necessary [41].
Fig. 1. Interactional Sense-making in a Co-creation.
To understand interaction dynamics in an open-ended improvisational collaboration, Kellas and Trees present a model of interactional sense-making [81]. They describe two types of interaction in the sense-making process: interaction between collaborators and interaction with the shared product (Figure 1). We adapt and extend this model for COFI to ground our space of possibilities for interaction design in the concept of interactional sense-making. Interaction with the shared product, in the context of a co-creative system, describes the ways in which a co-creator can sense, contribute to, and edit the content of the emerging creative product. Interaction between collaborators describes how the interaction between the co-creators unfolds through time, which includes turn-taking, timing of initiative, communication, etc. Participatory sense-making occurs when there is a mutual co-regulation of these two interactional sense-making
processes between the co-creators. For example, participatory sense-making occurs when both participants adapt their responses based on each other's contributions while maintaining an engaging interaction dynamic.
3 CO-CREATIVE FRAMEWORK FOR INTERACTION DESIGN (COFI)
We develop and present the Co-Creative Framework for Interaction Design (COFI) as a space of possibilities for interaction design in co-creative systems. COFI also provides a framework for analyzing the interaction design trends of existing co-creative systems. The framework describes the various aspects involved in the interaction between the human and the AI. COFI is informed by research on human collaboration, CSCW, computational creativity, and human-computer co-creativity.
The primary categories of COFI are based on the two types of interactional sense-making in collaboration described by Kellas and Trees (Figure 2) [81]: interaction between collaborators and interaction with the shared product. Interaction with the shared product, in the context of co-creative systems, describes interaction aspects related to the creation of the creative content. Interaction between collaborators describes how the interaction between the human and the AI unfolds through time, which includes turn-taking, timing of initiative, communication, etc. Thus, COFI characterizes the relational interaction dynamics between the collaborators (human and AI) as well as the functional aspects of interacting with the shared creative product. Kellas and Trees' framework was used for explaining and evaluating the interaction dynamics of human creative collaboration in joint storytelling. Understanding collaborative creativity among humans can be the basis for designing effective co-creative systems in which the AI agent acts as a creative partner.
Each of the two categories of interaction is further divided into two subcategories. Interaction between collaborators is divided into collaboration style and communication style, while interaction with the shared product is divided into the creative process and the creative product. The CSCW literature discusses collaboration mechanics among collaborators for making effective CSCW systems. Many frameworks for groupware and CSCW systems discuss and emphasize both the collaboration components and the communication components among collaborators. For example, Baker et al. proposed an evaluation technique based on collaboration mechanics for groupware and emphasized both coordination and communication components in a collaboration [11]. The creativity literature focuses more on the emergence of creativity, which includes creative processes and the creative product. For example, Rhodes's famous 4P model, one of the most acknowledged models, includes the creative process and product [130]. Therefore, in COFI, the literature on human collaboration and CSCW informs the category 'interaction between the collaborators', while the creativity and co-creativity literature provides descriptions of the 'interaction with the shared product'. In human-AI co-creativity, the focus should be on both creativity and collaboration. As a result, both the CSCW and creativity literature provide the basis for defining the interaction components of COFI under the four subcategories.
We performed a literature review to identify the components of COFI. We identified a list of search databases for relevant academic publications: the ACM Digital Library, arXiv, Elsevier, Springer, ScienceDirect, and Google Scholar. We used keywords based on the four Cs in COFI: collaboration style, communication style, creative process, and creative product. The full list of keywords is: 'human collaboration mechanics,' 'creative collaboration among humans,' 'communication in collaboration,' 'cooperation mechanics,' 'interaction in joint action,' 'groupware communication,' 'interaction design in computational creativity,' 'interaction in co-creativity,' 'creative process,' 'group interaction in computational creativity,' and 'interaction in human-computer co-creation.' We considered documents published from 1990 until 2021. We excluded papers that are tutorials or posters, papers that are not in English, papers whose title or abstract placed them outside the scope of the research, and papers that do not describe collaboration mechanics or group interaction. We included papers describing strategies, mechanisms, and components of interaction in natural collaboration, computer-mediated collaboration, and human-AI collaboration.
Fig. 2. Co-Creative Framework for Interaction Design (COFI): On the left (a) Components of Interaction between the collaborators, On the right (b) Components of Interaction with the Shared Product.
COFI was developed in an iterative process of adding, merging, and removing components based on the interaction components defined in the literature. We refer to the specific publications that contributed to each component of COFI in the sections below: for each interaction component, the first paragraph defines the component, and the second paragraph references the relevant publications that provided the basis for that component.
3.1 Interaction between Collaborators (Human and AI)
This section presents the components related to the relational interaction dynamics between the human and the AI as co-creators. As shown in Figure 2(a), interaction between collaborators is divided into two subcategories: collaboration style and communication style.
3.1.1 Collaboration Style. Collaboration style is the manner of working together in a co-creation. In COFI, collaboration style comprises participation style, task distribution, timing of initiative, and mimicry as interaction components. The following subsections describe each interaction component in this category.
Participation Style: Participation style in COFI refers to whether the collaborators can participate and contribute simultaneously or whether one collaborator has to wait until the partner finishes a turn. Accordingly, participation style in COFI is
categorized as parallel and turn-taking. For example, in a human-AI drawing co-creation, the collaborators can take turns contributing to the final drawing, or they can draw simultaneously.
Participation style in COFI is based on the categorization of interpersonal interaction into two types: concurrent interaction and turn-based interaction [97]. In concurrent interaction, continuous parallel participation by the collaborators occurs, whereas in turn-based interaction, participants take turns contributing. In a parallel participation style, both collaborators can contribute and interact simultaneously [124]. In a turn-taking setting, simultaneous contribution cannot occur [124]. In CSCW research, there is a related distinction between synchronous and asynchronous interaction: synchronous interaction is real-time interaction that requires the presence of all collaborators, whereas asynchronous cooperation does not require the simultaneous interaction of all collaborators [22, 128, 132]. In CSCW, the distinction between synchronous and asynchronous interaction concerns information exchange in terms of time. In COFI, participation style describes the way collaborators participate when all are present at the same time.
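To make the distinction concrete, the following minimal Python sketch contrasts the two participation styles. It is an illustration only: the human_turn and ai_turn callables and the list-based shared product are hypothetical stand-ins for a real co-creative system, not constructs defined by COFI.

import threading

def turn_taking_session(human_turn, ai_turn, rounds=3):
    # Turn-taking: each collaborator waits until the partner finishes a turn.
    product = []
    for _ in range(rounds):
        product.append(human_turn(product))  # the human completes a turn first
        product.append(ai_turn(product))     # only then does the AI respond
    return product

def parallel_session(human_turn, ai_turn, contributions=3):
    # Parallel: both collaborators contribute to the shared product concurrently.
    product = []
    lock = threading.Lock()

    def run(contribute):
        for _ in range(contributions):
            item = contribute(product)
            with lock:  # serialize writes to the shared list, not participation
                product.append(item)

    threads = [threading.Thread(target=run, args=(f,)) for f in (human_turn, ai_turn)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return product

# Trivial stand-in contributors for demonstration:
print(turn_taking_session(lambda p: "human stroke", lambda p: "ai stroke"))
print(parallel_session(lambda p: "human stroke", lambda p: "ai stroke"))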
Task Distribution: Task distribution refers to the distribution of tasks among the collaborators in a co-creative system. In COFI, there are two types of task distribution: same task and task divided. With same task, there is no division of tasks between collaborators, and all collaborators take part in the same task. For example, in human-AI co-creative drawing, both co-creators do the same task, i.e., generating the drawing. In a task-divided distribution, the main task is divided into specific sub-tasks, and the sub-tasks are distributed among the collaborators. For example, in co-creative poetry, the user can define the conceptual space for the poetry and generate a poem while the AI agent evaluates the poetry.
Cahan and Fewell asserted that the division of tasks is a key factor in the success of social groups [23]. According to Fischer and Mandl, task division should be addressed for coordination in a computer-mediated collaboration [56]. This component of COFI emerged from discussions of the two interaction modes presented by Kantosalo and Toivonen: alternating co-creativity and task-divided co-creativity [79]. In alternating co-creativity, each party contributes to the shared artifact while doing the same task by taking turns; Kantosalo and Toivonen emphasized the turn-taking in the alternating interaction mode. In COFI, we renamed alternating co-creativity to same task because we want to emphasize the task distribution. Task divided in COFI is the same term used by Kantosalo and Toivonen [79].
Timing of Initiative: In a co-creative setting, the timing of collaborators' initiative can be scheduled beforehand, or it can be spontaneous. If the timing of the initiative is planned or fixed in advance, in COFI it is addressed as planned. If both agents initiate their contributions without any prior plan or fixed rules, it is addressed as spontaneous. The timing of the initiative should be chosen based on the motivation behind designing a co-creative system: spontaneous timing is suitable for more emergent results, whereas planned timing is more suitable for systems where users want inspiration or help in a specific way for a particular aspect of the creative process.
Salvador et al. discussed timing of initiative in their framework for evaluating groupware that supports collaboration [134]. They defined two types of timing of initiative: spontaneous initiative, where participants take initiative spontaneously, and pre-planned initiative, where group interactions are scheduled in advance. Alam et al. divided group interaction into planned and impromptu [5]. For COFI, we merged these ways of describing the timing of initiative into spontaneous and planned.
Mimicry: COFI includes mimicry as a subcategory of collaboration style; it is used in some co-creative systems as an intentional strategy for collaboration. When mimicry is the strategy for the AI contribution, the co-creative AI mimics the human user.
Drawing Apprentice [40] is a co-creative web-based drawing system that collaborates with users on real-time abstract drawing while mimicking them. The authors' findings demonstrated that even though the Drawing Apprentice
mimics the user in the creative process, the system engaged users and led them to generate novel ideas. An example of a non-mimicking co-creative system is Viewpoints AI, a co-creative system in which a human can engage in collaborative dance movement while the system reads and interprets the movement and responds with an improvised movement [69].
3.1.2 Communication Style. In COFI, communication style refers to the ways humans and the AI can communicate. Communication is an essential component of any collaboration: it supports co-regulation between the collaborators and helps the AI agent make decisions in a creative process [19]. Communication is critical for achieving understanding and coordination between collaborators, and a significant challenge in human-AI collaboration is the development of common ground for communication between humans and machines [37]. Collaborators communicate in different ways in a co-creation, such as through the shared product and their contributions, and through different communication channels or modalities. In co-creative systems, collaborators contribute to the shared product through the creative process, make sense of each other's contributions during the process, and act accordingly. Communicating through the shared product is a prerequisite in a co-creation or any collaborative system [18]; hence, COFI does not include interaction through the shared product under communication style. In COFI, communication style includes the different channels or modalities designed to convey intentional and unintentional information between users and the AI. Human to AI communication channels carry information from users to the AI, and AI to human communication channels carry information from the AI to users.
Human to AI Intentional Communication: Human to AI intentional communication channels represent the possible ways a human can intentionally and purposefully communicate with the AI agent to provide feedback and convey important information. In COFI, the human to AI intentional communication channels include direct manipulation, voice, text, and embodied communication. The human can directly manipulate the co-creative system by clicking buttons to give instructions, feedback, or input, and can express preferences by selecting from AI-provided options. Using the whole body or gestures to communicate with the computer is referred to as embodied communication. Voice and text can also be used as intentional communication channels from human to AI.
Gutwin and Greenberg proposed a framework that discusses the mechanics of collaboration for groupware [61]. Their framework includes seven major elements, one of which is explicit or intentional communication. Bard defined intentional communication as the ability to coordinate behavior involving agents [12]. Brink argued that the primary goal of intentional communication is to establish joint attention [20]. In the field of human-computer interaction, a communication channel between humans and computers is described as a modality. The modalities for intentional communication from human to AI include direct manipulation, embodied/gesture, text, and voice [117].
Human to AI Consequential Communication: In COFI, human to AI consequential communication channels represent the ways the human user unintentionally or unconsciously gives off information to the AI agent. In other words, these channels represent the ways a co-creative AI agent can track and collect unintentional or consequential information from the human user, such as eye tracking, facial expression tracking, biometric data tracking, and embodied movements. AI agents can track and collect various consequential details from the human to perceive user preference, agency, and engagement. For example, a posture or facial expression can indicate boredom or lack of interest.
Gutwin and Greenberg reported consequential or unintentional communication as a major element of collaboration mechanics, in addition to intentional communication [61]. Collaborators pick up important information that is unintentionally "given off" by others, which is considered consequential communication in a human collaboration. Unintentional communication channels such as embodied communication, gaze, biometric measurement, and facial expression
are consequential communication [61]. Revealing the internal state of an individual is termed 'nonverbal leakage' by Ekman and Friesen [52]. Mutlu et al. argued that in a human-AI interaction, unintentional cues have a significant impact on user experience [114].
AI to Human Communication: AI to human communication represents the channels through which the AI can communicate with humans. In teamwork, humans expect feedback, critique, and evaluation of their contributions from their collaborators. If the AI agent could communicate its status, opinion, critique, and feedback for a specific contribution, it would make the co-creation more balanced, as the computational agent would be perceived as an intelligent entity and a co-equal creative partner rather than a mere tool. This communication involves intentional information from the AI to the human: because the interaction abilities of a co-creative AI agent are programmed, all of the communication from the AI is intentional. One may ask whether AI can do anything unintentional or unconscious beyond the programmed interaction. A co-creative AI can have a body and can make a facial expression of boredom; however, can we call it unintentional, or is it intentional information designed to resemble a human's consequential communication? Whether consequential communication from the AI to the user is even possible to design is an interesting open question. Mutlu et al. investigated the impact of 'nonverbal leakage' in robots on human collaborators [114]; however, the leakage was designed intentionally as part of the interaction design.
In a co-creative setting, the modalities for AI-initiated communication can include text, voice, visuals (icons, images, animations), haptics, and embodied communication [117]. Some communication channels work for both human to AI and AI to human communication, such as text, voice, and embodied communication. These communication channels appear under both categories to identify the possibilities based on the direction of information flow.
3.2 Interaction with the Shared Product
This section discusses the interaction components related to the shared creative product in a co-creative setting, illustrated in Figure 2(b). Interaction with the shared product is divided into two subcategories: the creative process and the creative contribution to the product.
3.2.1 Creative Process. The creative process characterizes the sequence of actions that lead to a novel and creative production [101]. In COFI, there are three types of creative processes that describe the interaction with the shared product: generate, evaluate, and define. A co-creative AI can play the role of a generator, an evaluator, or a definer depending on the creative process. In the generation process, the co-creative AI generates creative ideas or artifacts; for example, it can generate a poem along with the user or produce music with users. In the evaluation process, co-creative AI agents evaluate the creative contributions made by the user; an example is analyzing and assessing a creative story generated by a user. In the definition process, the AI agent defines the creative concept or explores different creative concepts along with the user; for example, a co-creative agent can define the attributes of a fictional character before a writer starts to write about the character.
The basis of this categorization is the work of Kantosalo et al., which defines the roles of the AI as generator, evaluator, and concept definer [79]. COFI adopts this categorization as a basis for understanding the range of potential creative processes: the generator generates artifacts within a specific conceptual description, the evaluator evaluates these concepts, and the concept definer defines the conceptual space [79]. In recent work, Kantosalo and Jordanous compared these roles with the apprentice framework of Negrete-Yankelevich and Morales-Zaragoza, in which the roles are generator, apprentice, and master [115].
3.2.2 Creative Product. The creative product is the idea or concept that is being created. The creative product has two interaction components: contribution type and contribution similarity. We identified these specific components because we focused on various aspects of contributing to the shared product, as meaning emerges through the contributions in a collaboration. These components are identified from the literature and discussed in the following subsections.
Contribution Type: In a co-creation, an individual can contribute in different ways to the shared product. Co-creators can generate new elements for the shared product, extend an existing contribution, or modify or refine an existing contribution. How a co-creator contributes depends on their interaction with the shared product and their interpretation of that interaction. The primary contribution types according to COFI are 'create new', 'extend', 'transform', and 'refine'. 'Extend' refers to extending or adding on to a previous contribution made by any of the collaborators. Generating something new or creating new objects is represented by 'create new', whereas 'transform' conveys turning a contribution into something totally different. 'Refine' is evaluating and correcting a contribution with a similar type of contribution. For example, in a co-creative drawing, drawing a tree is considered 'create new'. 'Extend' is when the collaborator adds a branch to the tree or extends its roots. Turning a tree branch into something else, such as a flower, is considered a 'transform'; it differs from 'create new' because it is performed on a previous contribution to turn it into a new object. 'Refine' is when the collaborator polishes the branch of the tree to give it more detail.
Contribution types are adopted and adapted from Boden's categories of computational creativity based on different types of contribution: combinatorial, exploratory, and transformational [15]. Combinatorial creativity involves novel (improbable) combinations of ideas similar to existing ideas; we adapted 'extend' and 'refine' from combinatorial creativity, as 'extend' is extending an existing contribution and 'refine' is correcting or emphasizing a contribution with similar ideas. Exploratory creativity involves the generation of novel ideas by exploring defined conceptual spaces; 'create new' is adapted from this, as users explore the conceptual space when creating something new. Transformational creativity involves transforming some dimension of the space so that new structures can be generated that could not have arisen before; 'transform' is adapted from this.
Contribution Similarity: In COFI, contribution similarity refers to the degree of similarity or association between a new contribution and the partner's contribution. Near refers to high similarity with the partner's contribution, and far means low similarity. In this paper, AI agents that make 'near' contributions are referred to as pleasing agents, and agents that make 'far' contributions are referred to as provoking agents.
Miura and Hida demonstrated that high similarity and low similarity in contributions and ideas among collaborators are both essential for greater gains in creative performance [112]. Both convergent and divergent exploration have value in a creative process. Divergent thinking is "thinking that moves away in diverging directions to involve a variety of aspects", whereas convergent thinking is "thinking that brings together information focused on something specific" [1]. Basadur et al. asserted that divergent thinking is related to the ideation phase and convergent thinking is related to the evaluation phase [13]. Kantosalo et al. defined pleasing and provoking AI agents based on how similar their contributions are [79]. A pleasing computational agent follows the human user and complies with the human contribution and preference. A provoking computational agent challenges the human-provided concepts with divergent ideas and dissimilar contributions.
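Taken together, these components define a finite design space that can be written down as a small data structure. The following Python sketch is one possible encoding of the COFI dimensions; the enum and field names are ours, chosen for illustration, and the example instance reflects the coding of EDME described in Section 4.3.1.

from dataclasses import dataclass
from enum import Enum

class Participation(Enum):
    TURN_TAKING = "turn-taking"
    PARALLEL = "parallel"

class TaskDistribution(Enum):
    SAME_TASK = "same task"
    TASK_DIVIDED = "task divided"

class Timing(Enum):
    PLANNED = "planned"
    SPONTANEOUS = "spontaneous"

class CreativeProcess(Enum):
    GENERATE = "generate"
    EVALUATE = "evaluate"
    DEFINE = "define"

class ContributionType(Enum):
    CREATE_NEW = "create new"
    EXTEND = "extend"
    TRANSFORM = "transform"
    REFINE = "refine"

class Similarity(Enum):
    NEAR = "near (pleasing)"
    FAR = "far (provoking)"

@dataclass
class COFICoding:
    # One system's interaction design, coded along the COFI dimensions.
    participation: Participation
    task_distribution: TaskDistribution
    timing: Timing
    mimicry: bool
    human_to_ai_intentional: list      # e.g., direct manipulation, voice, text, embodied
    human_to_ai_consequential: list    # e.g., eye tracking, facial expression, biometric
    ai_to_human: list                  # e.g., text, voice, visuals, haptic, embodied
    creative_process: CreativeProcess
    contribution_type: ContributionType
    similarity: Similarity

# Example: a coding consistent with the description of EDME in Section 4.3.1.
edme = COFICoding(
    participation=Participation.TURN_TAKING,
    task_distribution=TaskDistribution.TASK_DIVIDED,
    timing=Timing.PLANNED,
    mimicry=False,
    human_to_ai_intentional=["direct manipulation"],
    human_to_ai_consequential=[],  # EDME tracks no consequential information
    ai_to_human=[],                # and has no AI-to-human channel
    creative_process=CreativeProcess.GENERATE,
    contribution_type=ContributionType.CREATE_NEW,
    similarity=Similarity.NEAR,
)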
4 ANALYSIS OF INTERACTION MODELS IN CO-CREATIVE SYSTEMS USING COFI
4.1 Data
We used COFI to analyze a corpus of co-creative systems to demonstrate COFI's value in describing the interaction designs of co-creative systems. We initiated our corpus using the archival website "Library of Mixed-Initiative Creative Interfaces" (LMICI), which archives many of the existing co-creative systems from the literature [2]. Mixed-initiative creative systems is often used as an alternative term for co-creative systems [161]. Angie Spoto and Natalia Oleynik created this archive after a workshop on mixed-initiative creative interfaces led by Deterding et al. in 2017 [2, 46]. The archive provides the corresponding literature and other relevant information for each of the systems. The LMICI archive consists of 74 co-creative systems from 1996 to 2017; we used 73 of them, excluding one due to a lack of information. We added 19 co-creative systems to the dataset to include recent systems (after 2017), using the keywords 'co-creativity' and 'human-AI creative collaboration' to search the ACM Digital Library and Google Scholar for co-creative systems from 2017 to 2021. Thus, we have 92 co-creative systems in the corpus whose interaction designs we analyzed using COFI. Table 1 lists all the co-creative systems with corresponding years and references. Figure 3 shows the count of co-creative systems in our dataset per year.
Fig. 3. Counts of Co-creative Systems in the Dataset per Year.
We grouped the systems into 13 categories describing their creative domains: Painting/Drawing/Art, Culinary, Dance, Music, Storytelling/Narrative/Writing, Game Design, Theatre/Performance, Video/Animation, Photography, Poetry, Industrial and Product Design, Graphic Design, and Humor/Comic. Figure 4 shows the count of systems in each category. The most common creative domains in the corpus are Music, Storytelling/Narrative/Writing, Game Design, and Painting/Drawing/Art. The distribution shows that some creative domains, for example Culinary, Humor/Comic, and Graphic Design, are not well represented in this dataset and are rarely used in developing co-creative systems.
Table 1. List of Co-creative Systems in the Dataset Sorted by Year.

1996: Improv [125]
1999: GeNotator [149]
2000: NEvAr [103]
2001: Metasynth [38]
2003: Facade [109], Continuator [122]
2005: LOGTELL [30]
2008: CombinFormation [83], REQUEST [131], miCollage [159], BeatBender [91], WEVVA [116]
2009: Terrain Sketching [57], JNETIC [14], Synthetic Audience [120], The Poetry Machine [155]
2010-2011: SKETCHAWORLD [142], Tanagra [143], Realtime Generation of Harmonic Progressions [50], JamBot [21], Filter Trouve [32], Clap-along [164], EDME [99], LEMu [99], Shimon [64], Stella [90], Party Quirks [105], Generation of Tracks in a High-end Racing Game [26], ELVIRA [31], Creating Choreography with Interactive Evolutionary Algorithms [51]
2012: Spaceship Generator [93], MaestroGenesis [148], PINTER [59], Co-PoeTryMe [119], A Formal Architecture of Shared Mental Models [63], Impro-Visor [82]
2013: Sentient Sketchbook [94], Dysphagia [141], Viewpoints AI [69], Ropossum [48], COCO Sketch [42], Sentient World [94]
2014: Chef Watson [126], Kill the Dragon and Rescue the Princess [89], Nehovah [144], Autodesk Dreamcatcher [119]
2015: CAHOOTS [153], Funky Ikebana [34], StyleMachine [3], Drawing Apprentice [40]
2016: Improvised Ensemble Music Making on Touch Screen [108], AceTalk [150], Chor-rnn [36], Cochoreo [27], Evolutionary Procedural 2D Map Generation [138], Danesh [35], Plecto [66], Image-to-Image [68], Robodanza [67], SpeakeSystem [162], TaleBox [28], ChordRipple [135], Robovie [73], Creative Assistant for Harmonic Blending [74], Writing Buddy [135], Recommender for Game Mechanics [104]
2017: TOPOSKETCH [154], TrussFab [88], Chimney [113], FabMachine [85], LuminAI [98], GAIA [60], 3Buddy [102], Deeptingle [84]
2018: The Image Artist [166], DuetDraw [118], Robocinni [4]
2019: In a Silent Way [110], Metaphoria [58], collabDraw [54], DrawMyPhoto [157]
2020: Shimon the Rapper [136], ALYSIA [29], Cobbie [96], WeMonet [92], Co-cuild [45], IEC [123], Creative Sketching Partner [80]
2021: BunCho [121], CharacterChat [140], StoryDrawer [165], FashionQ [71]
4.2 Coding Scheme
To analyze the interaction designs of the existing co-creative systems, we coded the interaction designs of the 92 systems using COFI. Two coders from our research team independently coded 25% of the systems following COFI and then reached consensus by discussing the disagreements in their codes (kappa inter-rater reliability = 0.79). The rest of the systems were coded by a single coder according to the consensus. For each system, the coding captures all interaction design components according to COFI. All the interaction components of the systems were coded according to the information provided in the corresponding literature. For a specific interaction component, when none of the subcategories is present in the interaction design, we coded it as 'None'.
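For reference, agreement of this kind can be computed with scikit-learn's cohen_kappa_score. A minimal sketch, assuming the two coders' labels for one COFI component are stored as parallel lists; the labels below are illustrative, not our actual codes:

from sklearn.metrics import cohen_kappa_score

# Hypothetical labels from two independent coders for one COFI component
# (participation style) across a sample of systems.
coder_a = ["turn-taking", "parallel", "turn-taking", "turn-taking", "parallel"]
coder_b = ["turn-taking", "parallel", "turn-taking", "parallel", "parallel"]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"kappa = {kappa:.2f}")  # chance-corrected inter-rater agreement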
4.3 Interaction Design Models among Co-creative Systems
To identify the different interaction models utilized by the co-creative systems in the dataset, we clustered all the systems using their interaction components. We used K-modes clustering [25, 65] to identify the clusters, as the K-modes algorithm is suitable for categorical data.
Fig. 4. Count of Co-Creative Systems in Different Creative Domains.
K-modes clustering is an extension of K-means that uses modes instead of means; the cluster centroids are represented by the modes of all the features. We used all the interaction components of COFI as features. We found three clusters of systems based on their interaction designs (Figure 5). The first cluster includes 67 co-creative systems, indicating a dominant interaction model; the second cluster includes 9 systems, and the third includes 16. We used chi-square tests to determine which interaction components contribute significantly to the clusters and found that all of the interaction components are significant factors (all p-values < 0.05). Figure 5 shows the three major interaction models, including all the interaction components (cluster centroids represented by feature modes).
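A minimal sketch of this analysis using the third-party kmodes package (pip install kmodes) and SciPy; the CSV file name and the one-categorical-column-per-component layout are assumptions for illustration, not artifacts released with this paper:

import pandas as pd
from kmodes.kmodes import KModes
from scipy.stats import chi2_contingency

# Hypothetical file: one row per system, one categorical column per COFI component.
df = pd.read_csv("cofi_codings.csv")

# Cluster the categorical codings into three interaction models.
km = KModes(n_clusters=3, init="Huang", n_init=10, random_state=0)
df["cluster"] = km.fit_predict(df)
print(km.cluster_centroids_)  # the mode of each feature per cluster

# Chi-square test of association between each component and cluster membership.
for component in df.columns.drop("cluster"):
    table = pd.crosstab(df[component], df["cluster"])
    chi2, p, dof, _ = chi2_contingency(table)
    print(f"{component}: chi2 = {chi2:.2f}, p = {p:.4f}")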
4.3.1 Cluster 1 - Interaction Design Model for Generative Pleasing AI Agents. The interaction model of the first cluster is the most prevalent, as the 67 systems in this cluster share the same or a similar model. This dominant interaction model shows that most of the co-creative systems in the dataset utilize turn-taking as the participation style: each collaborator must wait until the partner finishes their turn. This interaction model uses 'planned' timing of initiative, an indication of non-improvisational co-creativity; hence, most of the systems in the dataset do not support improvisational creativity. This interaction model uses direct manipulation for human to AI intentional communication but does not incorporate any human to AI consequential communication or AI to human communication. The main task is divided between the collaborators, and in most of the systems in this cluster the AI agent uses generation as the creative process, creating something new without mimicking the user. The degree of similarity in contribution is high: the AI agent pleases the human by generating contributions that follow along with the contributions made by the human. Mostly, this interaction model is used by non-improvisational systems that generate creative products to please the users.
Interaction between Collaborators

| Cluster No / Type | Participation Style | Task Distribution | Timing of Initiative | Mimicry | Human to AI Intentional | Human to AI Consequential | AI to Human Communication |
| 1 / Generative Pleasing Co-creative AI Agents (67 of 92) | Turn-taking | Task Divided | Planned | Non-mimic | Direct Manipulation | None | None |
| 2 / Improvisational Co-creative AI Agents (9 of 92) | Parallel | Single Task | Spontaneous | Mimic + Non-mimic | None | None | None |
| 3 / Advisory Co-creative AI Agents (16 of 92) | Turn-taking | Task Divided | Planned | Non-mimic | Direct Manipulation | None | None |

Interaction with the Shared Product

| Cluster No / Type | AI Contribution Type | Contribution Similarity | Creative Process |
| 1 / Generative Pleasing Co-creative AI Agents (67 of 92) | Create New | High | Generation |
| 2 / Improvisational Co-creative AI Agents (9 of 92) | Create New | High + Low | Generation |
| 3 / Advisory Co-creative AI Agents (16 of 92) | Create New + Refine | High + Low | Generation + Evaluation |
Fig. 5. Interaction Designs for the Three Clusters of Co-creative Systems.
An example of a system that uses this interaction design is Emotion Driven Music Engine (EDME) [99]. EDME
generates music based on the emotions of the user. The user selects an emotion, and EDME plays music to match that emotion. The system works with the user in a turn-taking manner. The timing of initiative is planned, as the system always responds after the human finishes selecting their emotion. The task is divided between the collaborators: the user defines the conceptual space by choosing an emotion from the interface, and the system generates music according to that emotion. The system contributes to the collaboration by creating something new without mimicking the user, producing music that is associated with and similar to the user-defined emotion. The biggest limitation here is that the human cannot give any feedback or communicate with the system regarding the generated music. The system cannot track any consequential information from the human, such as facial expressions, eye gaze, or embodied gestures. Nor can the system communicate any relevant information to the user, such as additional information about its contribution or visual cues.
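The control flow of this model is simple enough to sketch in a few lines. The loop below illustrates the Cluster 1 pattern (turn-taking, planned initiative, direct manipulation as the only channel); the function names and placeholder generator are hypothetical and are not EDME's actual API.

```python
# A minimal sketch of the Cluster 1 interaction loop: turn-taking with
# planned initiative and direct manipulation as the sole channel.
def generate_music(emotion: str) -> str:
    """Hypothetical placeholder for an emotion-conditioned generator."""
    return f"<music matching '{emotion}'>"

def cluster1_session() -> None:
    while True:
        # Human turn: intentional communication via direct manipulation.
        emotion = input("Select an emotion (or 'quit'): ")
        if emotion == "quit":
            break
        # AI turn: planned initiative, i.e., it always responds after the
        # human's turn ends. Note there is no AI-to-human channel and no
        # consequential tracking -- exactly the gap discussed above.
        print(generate_music(emotion))

if __name__ == "__main__":
    cluster1_session()
```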
4.3.2 Cluster 2 - Interaction Design Model for Improvisational AI Agents. The interaction design for the systems in cluster 2 uses a parallel participation style, where both agents can contribute simultaneously. The task distribution for
these systems is usually 'same task', and most of the systems contribute through generation in the creative process. Most of the systems in this cluster contribute to the collaboration by creating something new, and they can do both mimicry and non-mimicry. The degree of similarity to the user's contributions can be either high or low. This interaction model employs spontaneous initiative-taking while both co-creators contribute to the same task in a parallel participation style, indicating improvisational co-creativity. Systems in this cluster have no communication channel between the user and the system, and a lack of communication in improvisational co-creativity can reduce collaboration quality and engagement [64].
An example system for this cluster is LuminAI, where human users improvise with virtual AI agents in real time to
create a dance performance [98]. Users move their bodies, and the AI agent responds with an improvised movement of its own. Both the AI agent and the users can dance simultaneously, and they take initiative spontaneously. The collaborators contribute to only a single task: generating dance movements. The AI can create new movements and transform user movements, and it can do both mimicry and non-mimicry, so its dance movements can be similar to or different from the user's. There is no way for the user to deliberately communicate with the system or for the system to communicate with the user. Here, the creative product itself is an embodied product, but the system cannot collect any consequential information from the user, such as eye gaze, facial expressions, or gestures other than dance moves.
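To make the contrast with the turn-taking model concrete, the sketch below simulates parallel participation with spontaneous initiative on a single shared task, where the AI switches between mimicry and non-mimicry; the threading setup and move vocabulary are illustrative assumptions, not LuminAI's implementation.

```python
# A minimal sketch of the Cluster 2 model: both agents contribute to one
# shared task concurrently, with unplanned (spontaneous) timing.
import random
import threading
import time

shared_performance: list[tuple[str, str]] = []  # the single shared task

def human_dancer(steps: int) -> None:
    for _ in range(steps):
        shared_performance.append(("human", random.choice(["spin", "wave", "step"])))
        time.sleep(random.uniform(0.05, 0.2))  # the human moves at their own pace

def ai_dancer(steps: int) -> None:
    for _ in range(steps):
        if shared_performance and random.random() < 0.5:
            move = shared_performance[-1][1]        # mimicry: repeat the last move
        else:
            move = random.choice(["leap", "turn"])  # non-mimicry: a novel move
        shared_performance.append(("ai", move))
        time.sleep(random.uniform(0.05, 0.2))  # spontaneous, unplanned timing

threads = [threading.Thread(target=human_dancer, args=(5,)),
           threading.Thread(target=ai_dancer, args=(5,))]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared_performance)  # interleaved contributions from both agents
```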
4.3.3 Cluster 3 - Interaction Design Model for Advisory AI Agents. The third cluster includes systems that work in a turn-taking manner, with the task divided into subtasks between the collaborators. Initiative-taking is planned prior to the collaboration. Users can communicate with the system through direct manipulation, but there is no human to AI consequential communication channel or AI to human communication channel. The most notable attribute of this interaction model is that the AI agent can both generate and evaluate, unlike the other two interaction models, where the AI agent can only contribute by generating. Systems with this interaction model can act as an adviser to the user by evaluating the user's contributions, and most of the systems in this cluster contribute by refining the user's contributions. These systems do not mimic the user's contributions, and the degree of contribution similarity can be either high or low.
An example of a co-creative system that utilizes this model is Sentient World, which assists video game designers in creating maps [94]. The designer creates a rough terrain sketch, and Sentient World evaluates the map created by the designer and then generates several refined maps as suggestions. The system works with the user in a turn-taking manner, and initiative-taking is planned. The AI agent uses both generation and evaluation as creative processes, generating maps and evaluating the maps created by the user. The user can communicate with the system minimally through direct manipulation (clicking buttons) to provide their preferences for the maps. The AI agent cannot communicate any explicit information to the human and cannot collect any consequential information from the user, such as facial expressions, eye gaze, or embodied information. Sentient World can both create new maps and refine the map created by the user. The system does not mimic the user's contribution, and the similarity to the user's contribution is high.
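A minimal sketch of one advisory turn is shown below: the agent scores the user's contribution (evaluation) and returns refined alternatives (generation). The terrain representation and the scoring and refinement functions are hypothetical placeholders, not Sentient World's algorithms.

```python
# A minimal sketch of the Cluster 3 model: a turn-taking agent that both
# evaluates the user's contribution and generates refined alternatives.
import random

def evaluate(sketch: list[int]) -> float:
    """Hypothetical quality score for a terrain sketch (list of heights)."""
    return sum(sketch) / len(sketch)

def refine(sketch: list[int]) -> list[int]:
    """Hypothetical refinement: small variations on the user's sketch."""
    return [h + random.randint(-1, 1) for h in sketch]

def advisory_turn(user_sketch: list[int], n_suggestions: int = 3):
    score = evaluate(user_sketch)                                      # evaluation
    suggestions = [refine(user_sketch) for _ in range(n_suggestions)]  # generation
    return score, suggestions

score, maps = advisory_turn([3, 5, 2, 7])  # a rough terrain sketch as heights
print(f"evaluation score: {score:.2f}")
for m in maps:
    print("refined suggestion:", m)
```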
4.4 Adoption Rate of the Interaction Components Used in the Systems
Figure 6 shows the adoption rate of each of the interaction components in COFI used in the systems. The first section of the table comprises the interaction components under collaboration style. Turn-taking is the most common participation style in the dataset (89.1%), while just 10.9% of the systems use parallel participation; parallel participation is used by systems that engage in performative co-creation. Most of the co-creative systems in the dataset use task-divided distribution (75%), as the collaborators work on separate creative subtasks, while 25% of the systems use 'same task' distribution, with both the user and the AI working on the same creative task(s). The timing of initiative is planned in 86.8% of the systems, and
the rest of the systems take spontaneous initiative without any fixed plan. For mimicry, 90.2% of the systems employ non-mimicry, 8.7% use both mimicry and non-mimicry, and only one system (1.1%) uses mimicry alone.
The second category, communication style, is concerned with the different communication channels used by the co-creative systems. 69.6% of the systems use direct manipulation as the human to AI communication channel; voice, embodied, and text channels are rarely used. 3.3% of the systems use embodied communication as human to AI consequential communication, and most of the systems (95.7%) do not track or collect any consequential information from the user. For AI to human communication, most systems have no channel at all. In the next section, we discuss the trends in communication channels in co-creative systems.
In the creative process category, it is noticeable that the majority of the systems (79.3%) employ generation as the creative process, and 15.2% of the systems use both generation and evaluation. Definition as a creative process is rarely used in co-creative systems.
In the creative product category, contribution type is the first interaction component, and most co-creative systems use 'create new' (59.8%). 10.9% of the systems use both 'create new' and 'refine' as the contribution type, and 8.7% use both 'create new' and 'extend'.
Collaboration Style
| Participation Style | Parallel 10.90% | Turn-Taking 89.10% |
| Task Distribution | Same Task 25% | Task Divided 75% |
| Timing of Initiative | Spontaneous 13.20% | Planned 86.80% |
| Mimicry | Mimic 1.10% | Non-Mimic 90.20% | Both 8.70% |

Communication Style
| Human to AI Intentional Communication | Voice 1.10% | Direct Manipulation 69.60% | Embodied 3.30% | Text 2.20% | Direct Manipulation + Embodied 1.10% | Voice + Direct Manipulation 1.10% | None 21.70% |
| Human to AI Consequential Communication | Gaze 0% | Facial Expression 0% | Biometric 1.10% | Embodied 3.30% | None 95.70% |
| AI to Human Communication | Speech 1.10% | Text 4.30% | Embodied 3.30% | Haptic 0% | Visual 5.50% | Embodied + Voice 2.10% | Embodied + Voice + Text 1.10% | None 82.60% |

Creative Process
| Creative Process | Generate 79.30% | Evaluate 2.20% | Define 1.10% | Generate + Define 2.20% | Generate + Evaluate 15.20% |

Creative Product
| Contribution Type | Create New 59.80% | Extend 4.30% | Transform 2.20% | Refine 5.40% | Create New + Refine 10.90% | Create New + Extend 8.70% | Create New + Transform 7.60% | Transform + Refine 1.10% |
| Contribution Similarity | Low 2.20% | High 69.60% | Both 27.10% | None 1.10% |
Fig. 6. Adoption Rate of Each Interaction Component used in the Co-creative Systems in the Dataset.
4.5 Communication in Interaction Models
Our analysis identifies a significant gap in the use of the components of interaction in the co-creative systems in this dataset: a lack of communication channels between humans and AI (Figure 3). In co-creative systems, subtle communication happens during the creative process through contributions. For example, in a collaborative drawing co-creative system with no communication channel between the user and the AI, subtle interaction happens through the shared product as co-creators make sense of each other's contributions and then make new contributions. Designing
different modalities for communication between the user and the AI has the potential to improve the coordination and quality of collaboration. However, 82.6% of the systems cannot communicate any feedback or information directly to the human collaborator other than through the shared product; the rest communicate with users through text, embodied communication, voice, or visuals (images and animation). For human to AI consequential communication, 95.7% of the systems cannot capture any consequential information from the human user, such as facial expressions, biometric data, gaze, or posture, even though consequential communication can increase user engagement in collaboration. For intentional communication from human to AI, most of the systems (69.6%) use direct manipulation (clicking buttons or selecting options). In other words, in most systems users can only minimally communicate with the AI or provide instructions directly, for example by clicking buttons or using sliders. 21.7% of the systems give the user no way to communicate with the AI intentionally at all. The rest of the systems use other intentional communication methods, such as embodied communication, voice, or text.
| Communication Channels | Human to AI Intentional Communication | Human to AI Consequential Communication | AI to Human Communication |
| None | 21.70% | 95.70% | 82.60% |
| Direct Manipulation | 68.50% | 0% | 0% |
| Embodied | 3.30% | 3.20% | 3.30% |
| Text | 0% | 0% | 4.30% |
| Others | 6.50% | 1.10% | 9.80% |
Fig. 7. Distribution of Different Kinds of Communication between Humans and AI in the Co-Creative Systems in the Dataset.
Some of the systems in our dataset utilize multiple communication channels. Shimon is a robot that plays the marimba
alongside a human musician [64]. Using embodied gestures as visual cues to anticipate each other's musical input, Shimon and the musician play an improvised song, responding to each other in real time. The robot and the human both use intentional embodied gestures as visual cues to communicate turn-taking and musical beats; therefore, this system includes both human to AI intentional communication and AI to human communication. Findings from a user study with Shimon demonstrate that visual cues aid synchronization during improvisational co-creativity. Another system with interesting communication channels is Robodanza, a humanoid robot that dances with humans [67]. Human dancers communicate intentionally by touching the robot's head to awaken it, and the robot tracks human faces to detect consequential information. The robot can also detect the noise and rhythm of hands clapping and tapping on a table, and it can move its head in the direction of the perceived rhythms and move its hands to the perceived tempo to communicate its status to the human users.
5 DISCUSSION
We develop and describe COFI to provide a framework for analyzing, comparing, and designing interaction in co-creative
systems. Researchers can use COFI to explore the possible spaces of interaction when choosing an appropriate interaction design for a specific system, and COFI can be beneficial when investigating and interpreting the interaction designs of existing co-creative systems. As a framework, COFI is expandable as other interaction components can be added in the future. We analyzed the interaction models of 92 existing co-creative systems using COFI to demonstrate its value in investigating the trends and gaps in existing interaction designs, and we identified three major clusters of interaction models utilized by these systems. In the following paragraphs, we explain the interaction models
and discuss the potential for further research on specific interaction components. These interaction models can be useful when designing a co-creative system, since they can help identify appropriate interaction components and determine whether interaction components should be modified for the corresponding type of co-creative AI agent.
The most common interaction model in our dataset is suitable for generative co-creative AI agents that follow and
comply with human contributions and ideas by generating similar contributions. Provoking agents are rare in the literature, and in fact, some authors explicitly oppose such a stance; for example, Tanagra's creators ensured "that Tanagra does not push its own agenda on the designer" [143]. However, both pleasing and provoking agents have use cases within co-creative systems [79]. For example, if a user is trying to produce concepts or ideas that convey their specific style, a pleasing agent that contributes similar ideas is more desirable. However, if a user is searching for varied ideas, a provoking agent with different contributions is an ideal creative partner, as it will provide more divergent ideas. This model could be improved with consequential communication tracking from users and with AI to human communication.
The second interaction model is suitable for improvisational AI agents as it uses spontaneous initiative-taking
and both agents work on the same task in parallel. Additionally, this model includes both mimicry and non-mimicry, unlike the other models, giving the AI the flexibility to take appropriate action in an improvisational performance. This model can be used as a guide when designing interaction in an improvisational co-creative system. However, it does not include any intentional or consequential communication channels from human to AI or from AI to human, which can negatively impact collaboration quality and user experience, especially in improvisational co-creativity, where communication is key. Hoffman et al. asserted that communication aids synchronization and coordination in improvisational co-creativity [64]. Further research can extend this model by adding or extending human-AI communication channels.
The third interaction model is used by co-creative AI agents that act as an advisor by evaluating the user's contributions and contributing to the shared product as a generator. In product-based co-creation, AI agents that can both generate and evaluate help the user produce precise creative ideas and artifacts. For example, in industrial design, a co-creative AI agent can help in creative ideation by evaluating the user-provided concept for a robust and error-free design and can also help generate the artifact with divergent or convergent ideas [88]. AI agents that use this model can refine the user's contributions, in contrast to the other models. The limitations of this model include the absence of human to AI consequential communication and AI to human communication.
A notable finding from the analysis of this dataset is the scarcity of AI agents that define the conceptual space as their creative process (only 4 out of 92). Most of the systems in the corpus contribute by generating, and some contribute by evaluating the human's contributions. In the context of co-creativity, defining the conceptual space is an essential task. An AI agent can define the conceptual space without any guidance from the user. For example, the Poetry Machine is a poetry generator that prompts the user with images, which users respond to with a line of poetry [2, 155]; the system then organizes the lines of poetry into a poem. An AI agent can also suggest multiple ideas for the conceptual space from which the user can select their preferred one: TopoSketch [154] generates animations based on a photo of a face provided by the human and displays various facial expressions as ideas for the final animation, and CharacterChat [140] inspires writers to create fictional characters through a conversation, with the bot guiding the user to define different attributes of the fictional character. Humans may desire inspiration for creative concepts and ideas at the beginning of a creative journey. Creative brainstorming and defining creative concepts are potential research areas for co-creative systems, and there is potential for designing new co-creative systems that both define the creative conceptual space and explore it with the user.
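One way to picture the 'define' role is the sketch below, in which the AI proposes candidate conceptual spaces and the user selects one to explore together; the theme list and selection flow are purely illustrative assumptions.

```python
# A minimal sketch of an AI using "define" as its creative process: it
# proposes candidate conceptual spaces for the user to choose from.
import random

def define_conceptual_space(n_candidates: int = 3) -> list[str]:
    themes = ["loss and renewal", "machines dreaming", "cities at night",
              "childhood summers", "borders and bridges"]  # illustrative only
    return random.sample(themes, n_candidates)

candidates = define_conceptual_space()
for i, theme in enumerate(candidates):
    print(f"{i}: {theme}")
choice = int(input("Pick a conceptual space to explore: "))
print(f"Co-creating within: {candidates[choice]}")
```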
The most significant area of improvement in all of the interaction models identified is communication, the key to coordination between two agents. Providing feedback or instructions and conveying information about a contribution are essential for creative collaboration. Without any communication channel between the co-creators, the creation becomes a silent game [146, 147], as collaborators cannot express concerns or provide feedback about their contributions. Communication through the creative product is subtle and may not be enough to maintain coordination and collaboration quality. Most of the existing co-creative systems in our dataset have minimal communication channels, and this hinders the collaboration ability of the AI agent and the interactive experience.
Most of the systems in the dataset use only direct manipulation for communicating intentional information from the users. Direct manipulation includes clicking buttons and using sliders to rate AI contributions, provide simple instructions, and collect user preferences. For most systems, direct manipulation provides only minimal communication and does not give users a way to communicate more broadly. Very few systems in the dataset use intentional human to AI communication channels other than direct manipulation. For example, AFAOSMM (2012) [63] is a theatre-based system that uses gestures as intentional communication, and Robodanza (2016) [67] uses embodied movements along with direct manipulation.
Human to AI consequential communication is rarely used in these systems, but it is an effective way to improve creative collaboration. It has been demonstrated that humans, during an interaction, can reason about others' ideas, goals, and intentions and predict their partners' behaviors, a capability called Theory of Mind (ToM) [10, 127, 163]. Having a Theory of Mind allows us to infer the mental states of others that are not directly observable, enabling us to engage in daily interaction. The ability to intuit what others think or want from brief nonverbal interactions is crucial to our social lives, as we see others' behavior not just as motion but as intentional action. In a collaboration, Theory of Mind is essential for observing and interpreting the behavior of a partner, maintaining coordination, and acting accordingly. Collecting unintentional information from the human partner has the potential to improve collaboration and user experience in human-AI co-creation and may lead to AI that approximates the human Theory of Mind ability. The technology for collecting consequential information from the user includes eye trackers, facial expression trackers, gesture recognition devices, and cognitive signal tracking devices.
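As a design aid, the sketch below treats the three COFI communication categories as pluggable channels on a co-creative agent; the channel registry and handler signatures are illustrative assumptions, not a prescribed architecture.

```python
# A minimal sketch of wiring COFI's three communication categories into a
# co-creative agent as pluggable channels (all names are illustrative).
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class CommunicationChannels:
    # Human-to-AI intentional (e.g., direct manipulation, voice, gesture)
    intentional: dict[str, Callable] = field(default_factory=dict)
    # Human-to-AI consequential (e.g., gaze, facial expression, biometrics)
    consequential: dict[str, Callable] = field(default_factory=dict)
    # AI-to-human (e.g., text explanation, visual cue, haptic feedback)
    ai_to_human: dict[str, Callable] = field(default_factory=dict)

channels = CommunicationChannels()
channels.intentional["direct_manipulation"] = lambda e: print("user set:", e)
channels.consequential["gaze"] = lambda xy: print("user is looking at:", xy)
channels.ai_to_human["text"] = lambda msg: print("AI explains:", msg)

# A design covering all three categories avoids the "silent game" that the
# analysis above identifies in most existing systems.
channels.intentional["direct_manipulation"]("tempo=120")
channels.consequential["gaze"]((0.4, 0.7))
channels.ai_to_human["text"]("I slowed the tempo to match your sketch.")
```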
AI to human communication channels are also rarely utilized in the identified interaction models. However, it is essential for users to understand their AI partner in order to build an engaging and trustworthy partnership. Many intelligent systems lack core interaction design principles such as transparency and explainability, which makes them hard to understand and use [49]. To address the challenge of AI transparency, interaction should be designed to support users in understanding and dealing with intelligent systems despite their complex black-box nature. When AI can communicate its decision-making process to users and explain its contributions, the system becomes more comprehensible and transparent, supporting a partnership. AI to human communication is therefore critical for interaction design in co-creative systems. Visuals, text, voice, embodied gestures, and haptic feedback can be used to convey information, suggestions, and feedback to users.
There is a distinction between AI to human communication and AI steerability. For example, LuminAI is a co-creative AI that dances with humans [98]. Here the generated creative product is dance, an embodied product created through gestures and embodied movements. However, the AI communicates only by contributing to the product and does not communicate with humans directly. Humans can steer the AI by contributing different embodied movements to the final product, and the AI generates contributions based on the user's movements. This is different from embodied communication that intentionally conveys information, such as a thumbs-up signaling that the collaborator is doing well. The gap in interaction design in terms of communication is an area of future research for the field of co-creativity. User experiments with different interaction models can help identify effective interaction designs for different types
of co-creative systems [129]. COFI provides a common framework for analyzing the interaction designs of existing co-creative systems to identify trends and gaps and thereby design improved interaction in new co-creative systems.
AI is increasingly used in collaborative spaces, for example in recommender systems, self-driving vehicles, and health care. Much AI research has focused on improving the intelligence or ability of agents and algorithms [8]. As AI technology shifts from computers to everyday devices, AI needs social understanding and cooperative intelligence to integrate into society and our daily lives. AI is, however, a novice when it comes to collaborating with humans [37]. The term 'human-AI collaboration' has emerged in recent work studying user interaction with AI systems [7, 24, 118, 151]. This marks both a shift from an automated to a collaborative perspective on AI and the advancement of AI capabilities toward being a collaborative partner in some domains. Ashktorab et al. asserted that human-AI co-creation could be a starting point for designing and developing AI that can cooperate with humans [8]. Human-AI interaction has many challenges and is difficult to design [160]. HCI deals with complex technologies, including research to mitigate unexpected consequences. A critical first step in designing valuable human-AI interactions is to identify technical challenges, articulate the unique qualities of AI that make it difficult to design for, and then develop insights for future research [160]. Building a fair and effective AI application is considered difficult due to the complexity both of defining the goals and of algorithmically achieving them. Prior research has addressed these challenges by promoting interaction design guidelines [6, 111]. In this paper, we provide COFI as a framework that describes the possible interaction space in human-AI creative collaboration and identifies trends and gaps in existing interaction designs. COFI can also be useful in AI and HCI research for designing cooperative AI in different domains, and it will expand as we learn and identify more aspects of human-AI collaboration.
6 LIMITATIONS
The identification of clusters of interaction models in human-AI co-creative systems is limited to the specific dataset that we used for the analysis. Although we believe this sample is large, the systems in the dataset are limited by the expectations and technologies at their time of publication. We expect the clusters and descriptions of interaction models for co-creative systems to change over time.
7 CONCLUSIONS
This paper develops and describes COFI, a framework for modeling interaction in co-creative systems. COFI was used to analyze the interaction design of 92 co-creative systems from the literature. Three interaction models for co-creative systems were identified: generative pleasing agents, improvisational agents, and advisory agents. When developing a co-creative system, these interaction models can be used to choose suitable interaction components for the corresponding type of system. COFI is broader than the interaction designs utilized in any specific co-creative system in the dataset, and the findings show that the space of possibilities is underutilized. While the analysis is limited to the dataset, it demonstrates that COFI can be a tool for identifying research directions and gaps in the current space of co-creativity. COFI revealed a general lack of communication in the co-creative systems within the dataset: in particular, very few systems incorporate AI to human communication, communication channels other than direct manipulation for collecting intentional information from humans, or the gathering of consequential communication data such as eye gaze, biometric data, gestures, and emotion. This gap marks an area of future research for the field of co-creativity. We argue that COFI provides useful guidelines for interaction modeling when developing co-creative systems. As a framework, COFI is expandable as other interaction components can be added to it in the future. User
experiments with different interaction models can help identify effective interaction design for different types of
co-creative systems and lead to insights into factors that affect user engagement.
REFERENCES
[1] [n.d.]. Dictionary, Encyclopedia and Thesaurus. https://www.thefreedictionary.com/
[2] [n.d.]. Library of Mixed-Initiative Creative Interfaces. http://mici.codingconduct.cc/. (Accessed on 05/31/2020).
[3] 2015. Style Machine Lite. https://metacreativetech.com/products/stylemachine-lite/
[4] Margareta Ackerman, James Morgan, and Christopher Cassion. 2018. Co-Creative Conceptual Art. In Proceedings of the Ninth International Conference on Computational Creativity. 1–8.
[5] Aftab Alam, Sehat Ullah, Shah Khalid, Fakhrud Din, and Ihsan Rabbi. 2013. Computer Supported Collaborative Work (CSCW) and Network Issues: A Survey. International Information Institute (Tokyo). Information 16, 11 (2013), 7995.
[6] Saleema Amershi, Dan Weld, Mihaela Vorvoreanu, Adam Fourney, Besmira Nushi, Penny Collisson, Jina Suh, Shamsi Iqbal, Paul N Bennett, Kori Inkpen, et al. 2019. Guidelines for human-AI interaction. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 1–13.
[7] Ines Arous, Jie Yang, Mourad Khayati, and Philippe Cudré-Mauroux. 2020. Opencrowd: A human-ai collaborative approach for finding social influencers via open-ended answers aggregation. In Proceedings of The Web Conference 2020. 1851–1862.
[8] Zahra Ashktorab, Q Vera Liao, Casey Dugan, James Johnson, Qian Pan, Wei Zhang, Sadhana Kumaravel, and Murray Campbell. 2020. Human-ai collaboration in a cooperative game setting: Measuring social perception and outcomes. Proceedings of the ACM on Human-Computer Interaction 4, CSCW2 (2020), 1–20.
[9] Ronald M Baecker. 1993. Readings in groupware and computer-supported cooperative work: Assisting human-human collaboration. Elsevier.
[10] Chris L Baker, Julian Jara-Ettinger, Rebecca Saxe, and Joshua B Tenenbaum. 2017. Rational quantitative attribution of beliefs, desires and percepts in human mentalizing. Nature Human Behaviour 1, 4 (2017), 1–10.
[11] Kevin Baker, Saul Greenberg, and Carl Gutwin. 2001. Heuristic evaluation of groupware based on the mechanics of collaboration. In IFIP International Conference on Engineering for Human-Computer Interaction. Springer, 123–139.
[12] Kim A Bard. 1992. Intentional behavior and intentional communication in young free-ranging orangutans. Child development 63, 5 (1992), 1186–1197.
[13] Min Basadur and Peter A Hausdorf. 1996. Measuring divergent thinking attitudes related to creative problem solving and innovation management. Creativity Research Journal 9, 1 (1996), 21–32.
[14] Steve R Bergen. 2009. Evolving stylized images using a user-interactive genetic algorithm. In Proceedings of the 11th Annual Conference Companion on Genetic and Evolutionary Computation Conference: Late Breaking Papers. 2745–2752.
[15] Margaret A Boden. 1998. Creativity and artificial intelligence. Artificial Intelligence 103, 1-2 (1998), 347–356.
[16] Oliver Bown. 2014. Empirically Grounding the Evaluation of Creative Systems: Incorporating Interaction Design. In ICCC. 112–119.
[17] Oliver Bown. 2015. Player Responses to a Live Algorithm: Conceptualising computational creativity without recourse to human comparisons? In ICCC. 126–133.
[18] Oliver Bown and Andrew R Brown. 2018. Interaction design for metacreative systems. In New Directions in Third Wave Human-Computer Interaction: Volume 1-Technologies. Springer, 67–87.
[19] Oliver Bown, Kazjon Grace, Liam Bray, and Dan Ventura. 2020. A Speculative Exploration of the Role of Dialogue in Human-Computer Co-creation. In ICCC. 25–32.
[20] Ingar Brinck. 2008. The role of intersubjectivity in the development of intentional communication. The shared mind: Perspectives on intersubjectivity (2008), 115–140.
[21] Andrew R Brown, Toby Gifford, and Rene Wooller. 2010. Generative Music Systems for Live Performance. In First International Conference on Computational Intelligence. 290.
[22] Diletta Cacciagrano and Flavio Corradini. 2001. On synchronous and asynchronous communication paradigms. In Italian Conference on Theoretical Computer Science. Springer, 256–268.
[23] Sara Helms Cahan and Jennifer H Fewell. 2004. Division of labor and the evolution of task sharing in queen associations of the harvester ant Pogonomyrmex californicus. Behavioral ecology and sociobiology 56, 1 (2004), 9–17.
[24] Carrie J Cai, Samantha Winter, David Steiner, Lauren Wilcox, and Michael Terry. 2019. "Hello AI": Uncovering the Onboarding Needs of Medical Practitioners for Human-AI Collaborative Decision-Making. Proceedings of the ACM on Human-computer Interaction 3, CSCW (2019), 1–24.
[25] Fuyuan Cao, Jiye Liang, Deyu Li, Liang Bai, and Chuangyin Dang. 2012. A dissimilarity measure for the k-Modes clustering algorithm. Knowledge-Based Systems 26 (2012), 120–127.
[26] Luigi Cardamone, Daniele Loiacono, and Pier Luca Lanzi. 2011. Interactive evolution for the procedural generation of tracks in a high-end racing game. In Proceedings of the 13th annual conference on Genetic and evolutionary computation. 395–402.
[27] Kristin Carlson, Philippe Pasquier, Herbert H Tsang, Jordon Phillips, Thecla Schiphorst, and Tom Calvert. 2016. Cochoreo: A generative feature in idanceforms for creating novel keyframe animation for choreography. In Proceedings of the 7th International Conference on Computational Creativity. 380–387.
[28] O Castaño Pérez, BA Kybartas, and Rafael Bidarra. 2016. TaleBox: A mobile game for mixed-initiative story creation. (2016).
[29] Lee Cheatley, Margareta Ackerman, Alison Pease, and Wendy Moncur. 2020. Co-creative songwriting for bereavement support. In Eleventh International Conference on Computational Creativity: ICCC'20. Association for Computational Creativity, 33–41.
[30] Angelo EM Ciarlini, Cesar T Pozzer, Antonio L Furtado, and Bruno Feijó. 2005. A logic-based tool for interactive generation and dramatization of stories. In Proceedings of the 2005 ACM SIGCHI International Conference on Advances in computer entertainment technology. 133–140.
[31] Simon Colton, Michael Cook, and Azalea Raad. 2011. Ludic considerations of tablet-based evo-art. In European Conference on the Applications of Evolutionary Computation. Springer, 223–233.
[32] Simon Colton, Jeremy Gow, Pedro Torres, and Paul A Cairns. 2010. Experiments in Objet Trouvé Browsing. In ICCC. 238–247.
[33] Simon Colton, Geraint A Wiggins, et al. 2012. Computational creativity: The final frontier? In Ecai, Vol. 12. Montpelier, 21–26.
[34] Kate Compton and Michael Mateas. 2015. Casual Creators. In ICCC. 228–235.
[35] Michael Cook, Jeremy Gow, and Simon Colton. 2016. Danesh: Helping bridge the gap between procedural generators and their output. (2016).
[36] Luka Crnkovic-Friis and Louise Crnkovic-Friis. 2016. Generative choreography using deep learning. arXiv preprint arXiv:1605.06921 (2016).
[37] Allan Dafoe, Yoram Bachrach, Gillian Hadfield, Eric Horvitz, Kate Larson, and Thore Graepel. 2021. Cooperative AI: machines must learn to find common ground.
[38] Palle Dahlstedt. 2001. A MutaSynth in parameter space: interactive composition through evolution. Organised Sound 6, 2 (2001), 121–124.
[39] Nicholas Davis, Chih-Pin Hsiao, Yanna Popova, and Brian Magerko. 2015. An enactive model of creativity for computational collaboration and co-creation. In Creativity in the Digital Age. Springer, 109–133.
[40] Nicholas Davis, Chih-Pin Hsiao, Kunwar Yashraj Singh, Lisa Li, Sanat Moningi, and Brian Magerko. 2015. Drawing apprentice: An enactive co-creative agent for artistic collaboration. In Proceedings of the 2015 ACM SIGCHI Conference on Creativity and Cognition. 185–186.
[41] Nicholas Davis, Chih-Pin Hsiao, Kunwar Yashraj Singh, Lisa Li, and Brian Magerko. 2016. Empirically studying participatory sense-making in abstract drawing with a co-creative cognitive agent. In Proceedings of the 21st International Conference on Intelligent User Interfaces. 196–207.
[42] Nicholas Mark Davis. 2013. Human-computer co-creativity: Blending human and computational creativity. In Ninth Artificial Intelligence and Interactive Digital Entertainment Conference.
[43] Hanne De Jaegher. 2013. Embodiment and sense-making in autism. Frontiers in integrative neuroscience 7 (2013), 15.
[44] Hanne De Jaegher and Ezequiel Di Paolo. 2007. Participatory sense-making. Phenomenology and the cognitive sciences 6, 4 (2007), 485–507.
[45] Manoj Deshpande. 2020. Towards Co-build: An Architecture Machine for Co-creative Form-making. Ph.D. Dissertation. The University of North Carolina at Charlotte.
[46] Sebastian Deterding, Jonathan Hook, Rebecca Fiebrink, Marco Gillies, Jeremy Gow, Memo Akten, Gillian Smith, Antonios Liapis, and Kate Compton. 2017. Mixed-initiative creative interfaces. In Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems. 628–635.
[47] Arne Dietrich. 2004. The cognitive neuroscience of creativity. Psychonomic bulletin & review 11, 6 (2004), 1011–1026.
[48] Steve DiPaola, Graeme McCaig, Kristin Carlson, Sara Salevati, and Nathan Sorenson. 2013. Adaptation of an Autonomous Creative Evolutionary System for Real-World Design Application Based on Creative Cognition. In ICCC. 40–47.
[49] Malin Eiband, Daniel Buschek, and Heinrich Hussmann. 2021. How to support users in understanding intelligent systems? Structuring the discussion. In 26th International Conference on Intelligent User Interfaces. 120–132.
[50] Arne Eigenfeldt and Philippe Pasquier. 2010. Realtime generation of harmonic progressions using controlled Markov selection. In Proceedings of ICCC-X-Computational Creativity Conference. 16–25.
[51] Jonathan Eisenmann, Benjamin Schroeder, Matthew Lewis, and Rick Parent. 2011. Creating choreography with interactive evolutionary algorithms. In European Conference on the Applications of Evolutionary Computation. Springer, 293–302.
[52] Paul Ekman and Wallace V Friesen. 1969. Nonverbal leakage and clues to deception. Psychiatry 32, 1 (1969), 88–106.
[53] Daniel Fallman. 2008. The interaction design research triangle of design practice, design studies, and design exploration. Design Issues 24, 3 (2008), 4–18.
[54] Judith E Fan, Monica Dinculescu, and David Ha. 2019. Collabdraw: an environment for collaborative sketching with an artificial agent. In Proceedings of the 2019 on Creativity and Cognition. 556–561.
[55] Valentina Fantasia, Hanne De Jaegher, and Alessandra Fasulo. 2014. We can work it out: an enactive look at cooperation. Frontiers in psychology 5 (2014), 874.
[56] Frank Fischer and Heinz Mandl. 2003. Being there or being where? Videoconferencing and cooperative learning. na.
[57] James Gain, Patrick Marais, and Wolfgang Straßer. 2009. Terrain sketching. In Proceedings of the 2009 symposium on Interactive 3D graphics and games. 31–38.
[58] Katy Ilonka Gero and Lydia B Chilton. 2019. Metaphoria: An algorithmic companion for metaphor creation. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 1–12.
[59] Stephen Gilroy, Julie Porteous, Fred Charles, and Marc Cavazza. 2012. Exploring passive user interaction for adaptive narratives. In Proceedings of the 2012 ACM international conference on Intelligent User Interfaces. 119–128.
[60] Ashok K Goel and Spencer Rugaber. 2017. GAIA: A CAD-like environment for designing game-playing agents. IEEE Intelligent Systems 32, 3 (2017), 60–67.
[61] Carl Gutwin, Saul Greenberg, and Mark Roseman. 1996. Workspace awareness in real-time distributed groupware: Framework, widgets, and evaluation. In People and Computers XI. Springer, 281–298.
[62] Matthew Guzdial and Mark Riedl. 2019. An interaction framework for studying co-creative ai. arXiv preprint arXiv:1903.09709 (2019).
[63] Rania Hodhod and Brian Magerko. 2016. Closing the cognitive gap between humans and interactive narrative agents using shared mental models. In Proceedings of the 21st International Conference on Intelligent User Interfaces. 135–146.
[64] Guy Hoffman and Gil Weinberg. 2011. Interactive improvisation with a robotic marimba player. Autonomous Robots 31, 2-3 (2011), 133–153.
[65] Zhexue Huang. 1997. Clustering large data sets with mixed numeric and categorical values. In Proceedings of the 1st Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD). Singapore, 21–34.
[66] Steffan Ianigro and Oliver Bown. 2016. Plecto: a low-level interactive genetic algorithm for the evolution of audio. In International Conference on Computational Intelligence in Music, Sound, Art and Design. Springer, 63–78.
[67] I Infantino, A Augello, A Manfré, G Pilato, and F Vella. 2016. Robodanza: Live performances of a creative dancing humanoid. In Proceedings of the Seventh International Conference on Computational Creativity. 388–395.
[68] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. 2017. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition. 1125–1134.
[69] Mikhail Jacob, Alexander Zook, and Brian Magerko. 2013. Viewpoints AI: Procedurally Representing and Reasoning about Gestures.. In DiGRA conference.
[70] Kyle E Jennings, Dean Keith Simonton, and Stephen E Palmer. 2011. Understanding exploratory creativity in a visual domain. In Proceedings of the 8th ACM conference on Creativity and cognition. 223–232.
[71] Youngseung Jeon, Seungwan Jin, Patrick C Shih, and Kyungsik Han. 2021. FashionQ: An AI-Driven Creativity Support Tool for Facilitating Ideation in Fashion Design. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–18.
[72] Rex Eugene Jung, Brittany S Mead, Jessica Carrasco, and Ranee A Flores. 2013. The structure of creative cognition in the human brain. Frontiers in human neuroscience 7 (2013), 330.
[73] Peter H Kahn, Takayuki Kanda, Hiroshi Ishiguro, Brian T Gill, Solace Shen, Jolina H Ruckert, and Heather E Gary. 2016. Human creativity can be facilitated through interacting with a social robot. In 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, 173–180.
[74] Maximos Kaliakatsos-Papakostas, Roberto Confalonieri, Joseph Corneli, Asterios Zacharakis, and Emilios Cambouropoulos. 2016. An argument- based creative assistant for harmonic blending. arXiv preprint arXiv:1603.01770 (2016).
[75] Nabil N Kamel and Robert M Davison. 1998. Applying CSCW technology to overcome traditional barriers in group interactions. Information & Management 34, 4 (1998), 209–219.
[76] Anna Kantosalo and Anna Jordanous. 2020. Role-based perceptions of computer participants in human-computer co-creativity. AISB.
[77] Anna Kantosalo, Prashanth Thattai Ravikumar, Kazjon Grace, and Tapio Takala. 2020. Modalities, Styles and Strategies: An Interaction Framework for Human-Computer Co-Creativity. In ICCC. 57–64.
[78] Anna Kantosalo, Jukka M Toivanen, Ping Xiao, and Hannu Toivonen. 2014. From Isolation to Involvement: Adapting Machine Creativity Software to Support Human-Computer Co-Creation.. In ICCC. 1–7.
[79] Anna Kantosalo and Hannu Toivonen. 2016. Modes for creative human-computer collaboration: Alternating and task-divided co-creativity. In Proceedings of the seventh international conference on computational creativity. 77–84.
[80] Pegah Karimi, Jeba Rezwana, Safat Siddiqui, Mary Lou Maher, and Nasrin Dehbozorgi. 2020. Creative sketching partner: an analysis of human-AI co-creativity. In Proceedings of the 25th International Conference on Intelligent User Interfaces. 221–230.
[81] Jody Koenig Kellas and April R Trees. 2005. Rating interactional sense-making in the process of joint storytelling. The sourcebook of nonverbal measures: Going beyond words (2005), 281.
[82] Robert M Keller. 2012. Continuous improvisation and trading with Impro-Visor. (2012).
[83] Andruid Kerne, Eunyee Koh, Steven M Smith, Andrew Webb, and Blake Dworaczyk. 2008. combinFormation: Mixed-initiative composition of image and text surrogates promotes information discovery. ACM Transactions on Information Systems (TOIS) 27, 1 (2008), 1–45.
[84] Ahmed Khalifa, Gabriella AB Barros, and Julian Togelius. 2017. Deeptingle. arXiv preprint arXiv:1705.03557 (2017).
[85] Jeeeun Kim, Haruki Takahashi, Homei Miyashita, Michelle Annett, and Tom Yeh. 2017. Machines as co-designers: A fiction on the future of human-fabrication machine interaction. In Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems. 790–805.
[86] Gary Klein, Brian Moon, and Robert R Hoffman. 2006. Making sense of sensemaking 1: Alternative perspectives. IEEE intelligent systems 21, 4 (2006), 70–73.
[87] Jon Kolko. 2010. Thoughts on interaction design. Morgan Kaufmann.
[88] Robert Kovacs, Anna Seufert, Ludwig Wall, Hsiang-Ting Chen, Florian Meinel, Willi Müller, Sijing You, Maximilian Brehm, Jonathan Striebel, Yannis Kommana, et al. 2017. Trussfab: Fabricating sturdy large-scale structures on desktop 3d printers. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. 2606–2616.
[89] Iván M Laclaustra, José Ledesma, Gonzalo Méndez, and Pablo Gervás. 2014. Kill the Dragon and Rescue the Princess: Designing a Plan-based Multi-agent Story Generator.. In ICCC. 347–350.
[90] Carlos León. 2011. Stella-a story generation system for generic scenarios. In Proceedings of the Second International Conference on Computational Creativity.
[91] Aaron Levisohn and Philippe Pasquier. 2008. BeatBender: subsumption architecture for autonomous rhythm generation. In Proceedings of the 2008 International Conference on Advances in Computer Entertainment Technology. 51–58.
[92] Zhuying Li, Yan Wang, Wei Wang, Stefan Greuter, and Florian 'Floyd' Mueller. 2020. Empowering a Creative City: Engage Citizens in Creating Street Art through Human-AI Collaboration. In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems. 1–8.
[93] Antonios Liapis, Georgios N Yannakakis, and Julian Togelius. 2012. Co-creating game content using an adaptive model of user taste. In 3rd International Conference on Computational Creativity.
[94] Antonios Liapis, Georgios N Yannakakis, and Julian Togelius. 2013. Sentient world: Human-based procedural cartography. In International Conference on Evolutionary and Biologically Inspired Music and Art. Springer, 180–191.
[95] Antonios Liapis, Georgios N Yannakakis, and Julian Togelius. 2014. Computational game creativity. ICCC.
[96] Yuyu Lin, Jiahao Guo, Yang Chen, Cheng Yao, and Fangtian Ying. 2020. It is your turn: collaborative ideation with a co-creative robot through sketch. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1–14.
[97] Tao Liu, Hirofumi Saito, and Misato Oi. 2015. Role of the right inferior frontal gyrus in turn-based cooperation and competition: a near-infrared spectroscopy study. Brain and cognition 99 (2015), 17–23.
[98] Duri Long, Mikhail Jacob, Nicholas Davis, and Brian Magerko. 2017. Designing for socially interactive systems. In Proceedings of the 2017 ACM SIGCHI Conference on Creativity and Cognition. 39–50.
[99] Alex Rodriguez Lopez, Antonio Pedro Oliveira, and Amílcar Cardoso. 2010. Real-Time Emotion-Driven Music Engine. In ICCC. 150–154.
[100] Ryan Louie, Andy Coenen, Cheng Zhi Huang, Michael Terry, and Carrie J Cai. 2020. Novice-AI music co-creation via AI-steering tools for deep generative models. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1–13.
[101] Todd I Lubart. 2001. Models of the creative process: Past, present and future. Creativity research journal 13, 3-4 (2001), 295–308.
[102] Pedro Lucas and Carlos Martinho. 2017. Stay Awhile and Listen to 3Buddy, a Co-creative Level Design Support Tool. In ICCC. 205–212.
[103] Penousal Machado and Amílcar Cardoso. 2000. NEvAr–the assessment of an evolutionary art tool. In Proceedings of the AISB00 Symposium on Creative & Cultural Aspects and Applications of AI & Cognitive Science, Birmingham, UK, Vol. 456.
[104] Tiago Machado, Ivan Bravi, Zhu Wang, Andy Nealen, and Julian Togelius. 2016. Shopping for Game Mechanics. (2016).
[105] Brian Magerko, Christopher DeLeon, and Peter Dohogne. 2011. Digital improvisational theatre: party quirks. In International Workshop on Intelligent Virtual Agents. Springer, 42–47.
[106] Mary Lou Maher. 2012. Computational and collective creativity: Who's being creative? In ICCC. Citeseer, 67–71.
[107] Lena Mamykina, Linda Candy, and Ernest Edmonds. 2002. Collaborative creativity. Commun. ACM 45, 10 (2002), 96–99.
[108] Charles Martin, Henry Gardner, Ben Swift, and Michael Martin. 2016. Intelligent agents and networked buttons improve free-improvised ensemble music-making on touch-screens. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. 2295–2306.
[109] Michael Mateas and Andrew Stern. 2003. Façade: An experiment in building a fully-realized interactive drama. In Game developers conference, Vol. 2. 4–8.
[110] Jon McCormack, Toby Gifford, Patrick Hutchings, Maria Teresa Llano Rodriguez, Matthew Yee-King, and Mark d’Inverno. 2019. In a silent way: Communication between ai and improvising musicians beyond sound. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 1–11.
[111] Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. 2019. Model cards for model reporting. In Proceedings of the conference on fairness, accountability, and transparency. 220–229.
[112] Asako Miura and Misao Hida. 2004. Synergy between diversity and similarity in group-idea generation. Small Group Research 35, 5 (2004), 540–564.
[113] Fabio Morreale, Raul Masu, et al. 2017. Renegotiating responsibilities in human-computer ensembles. (2017).
[114] Bilge Mutlu, Fumitaka Yamaoka, Takayuki Kanda, Hiroshi Ishiguro, and Norihiro Hagita. 2009. Nonverbal leakage in robots: communication of intentions through seemingly unintentional behavior. In Proceedings of the 4th ACM/IEEE international conference on Human robot interaction. 69–76.
[115] Santiago Negrete-Yankelevich and Nora Morales Zaragoza. 2014. The apprentice framework: planning and assessing creativity. In ICCC. 280–283.
[116] Mark Nelson, Swen Gaudl, Simon Colton, Edward Powley, Blanca Perez Ferrer, Rob Saunders, Peter Ivey, and Michael Cook. 2017. Fluidic games in cultural contexts. (2017).
[117] Laurence Nigay. 2004. Design space for multimodal interaction. In Building the Information Society. Springer, 403–408.
[118] Changhoon Oh, Jungwoo Song, Jinhan Choi, Seonghyeon Kim, Sungwoo Lee, and Bongwon Suh. 2018. I lead, you help but only with enough details: Understanding user experience of co-creation with artificial intelligence. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. 1–13.
[119] Hugo Gonçalo Oliveira, Raquel Hervás, Alberto Díaz, and Pablo Gervás. 2014. Adapting a Generic Platform for Poetry Generation to Produce Spanish Poems.. In ICCC. 63–71.
[120] Brian O'Neill and Mark Riedl. 2011. Simulating the Everyday Creativity of Readers. In ICCC. 153–158.
[121] Hiroyuki Osone, Jun-Li Lu, and Yoichi Ochiai. 2021. BunCho: AI Supported Story Co-Creation via Unsupervised Multitask Learning to Increase Writers' Creativity in Japanese. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems. 1–10.
[122] Francois Pachet. 2003. The continuator: Musical interaction with style. Journal of New Music Research 32, 3 (2003), 333–341.
[123] Jéssica Parente, Tiago Martins, Joao Bicker, and Penousal Machado. 2020. Which type is your type? In ICCC. 476–483.
[124] Victor M Ruiz Penichet, Ismael Marin, Jose A Gallud, María Dolores Lozano, and Ricardo Tesoriero. 2007. A classification method for CSCW systems. Electronic Notes in Theoretical Computer Science 168 (2007), 237–247.
[125] Ken Perlin and Athomas Goldberg. 1996. Improv: A system for scripting interactive actors in virtual worlds. In Proceedings of the 23rd annual conference on Computer graphics and interactive techniques. 205–216.
[126] Florian Pinel and Lav R Varshney. 2014. Computational creativity for culinary recipes. In CHI’14 Extended Abstracts on Human Factors in Computing Systems. 439–442.
[127] David Premack and Guy Woodruff. 1978. Does the chimpanzee have a theory of mind? Behavioral and brain sciences 1, 4 (1978), 515–526.
[128] Walter Reinhard, Jean Schweitzer, Gerd Volksen, and Michael Weber. 1994. CSCW tools: concepts and architectures. Computer 27, 5 (1994), 28–36.
[129] Jeba Rezwana, Mary Lou Maher, and Nicholas Davis. 2021. Creative PenPal: A Virtual Embodied Conversational AI Agent to Improve User Engagement and Collaborative Experience in Human-AI Co-Creative Design Ideation. (2021).
[130] Mel Rhodes. 1961. An analysis of creativity. The Phi delta kappan 42, 7 (1961), 305–310.
[131] Mark Riedl, Jonathan Rowe, and David K Elson. 2008. Toward intelligent support of authoring machinima media content: story and visualization. (2008).
[132] Tom Rodden and Gordon Blair. 1991. CSCW and distributed systems: The problem of control. In Proceedings of the Second European Conference on Computer-Supported Cooperative Work ECSCW’91. Springer, 49–64.
[133] Daniel M Russell, Mark J Stefik, Peter Pirolli, and Stuart K Card. 1993. The cost structure of sensemaking. In Proceedings of the INTERACT’93 and CHI’93 conference on Human factors in computing systems. 269–276.
[134] Tony Salvador, Jean Scholtz, and James Larson. 1996. The Denver model for groupware design. ACM SIGCHI Bulletin 28, 1 (1996), 52–58.
[135] Ben Samuel, Michael Mateas, and Noah Wardrip-Fruin. 2016. The design of Writing Buddy: a mixed-initiative approach towards computational story collaboration. In International Conference on Interactive Digital Storytelling. Springer, 388–396.
[136] Richard Savery, Lisa Zahray, and Gil Weinberg. 2020. Shimon the Rapper: A Real-Time System for Human-Robot Interactive Rap Battles. arXiv preprint arXiv:2009.09234 (2020).
[137] R Keith Sawyer and Stacy DeZutter. 2009. Distributed creativity: How collective creations emerge from collaboration. Psychology of aesthetics, creativity, and the arts 3, 2 (2009), 81.
[138] Andreas Scheibenpflug, Johannes Karder, Susanne Schaller, Stefan Wagner, and Michael Affenzeller. 2016. Evolutionary Procedural 2D Map Generation using Novelty Search. In Proceedings of the 2016 on Genetic and Evolutionary Computation Conference Companion. 39–40.
[139] Kjeld Schmidt. 2008. Cooperative work and coordinative practices. In Cooperative Work and Coordinative Practices. Springer, 3–27.
[140] Oliver Schmitt and Daniel Buschek. 2021. CharacterChat: Supporting the Creation of Fictional Characters through Conversation and Progressive Manifestation with a Chatbot. arXiv preprint arXiv:2106.12314 (2021).
[141] Noor Shaker, Mohammad Shaker, and Julian Togelius. 2013. Ropossum: An authoring tool for designing, optimizing and solving cut the rope levels. In Ninth Artificial Intelligence and Interactive Digital Entertainment Conference.
[142] Ruben Michaël Smelik, Tim Tutenel, Klaas Jan de Kraker, and Rafael Bidarra. 2010. Interactive creation of virtual worlds using procedural sketching.. In Eurographics (Short papers). 29–32.
[143] Gillian Smith, Jim Whitehead, and Michael Mateas. 2010. Tanagra: A mixed-initiative level design tool. In Proceedings of the Fifth International Conference on the Foundations of Digital Games. 209–216.
[144] Michael R Smith, Ryan S Hintze, and Dan Ventura. 2014. Nehovah: A Neologism Creator Nomen Ipsum. In ICCC. 173–181.
[145] Frank K Sonnenberg. 1991. Strategies for creativity. Journal of Business Strategy (1991).
[146] Jannick Kirk Sørensen. 2016. Silent game as Model for Examining Student Online Creativity-Preliminary Results from an Experiment. Think CROSS. Magdeburg: Change MEDIA 10 (2016).
[147] Jannick Kirk Sørensen. 2017. Exploring Constrained Creative Communication: The Silent Game as Model for Studying Online Collaboration. International Journal of E-Services and Mobile Applications (IJESMA) 9, 4 (2017), 1–23.
[148] Paul A Szerlip, Amy K Hoover, and Kenneth O Stanley. 2012. Maestrogenesis: Computer-assisted musical accompaniment generation. (2012).
[149] Kurt Thywissen. 1999. GeNotator: an environment for exploring the application of evolutionary techniques in computer-assisted composition. Organised Sound 4, 2 (1999), 127–133.
[150] Ha Trinh, Darren Edge, Lazlo Ring, and Timothy Bickmore. 2016. Thinking Outside the Box: Co-planning Scientific Presentations with Virtual Agents. In International Conference on Intelligent Virtual Agents. Springer, 306–316.
[151] Dakuo Wang, Justin D Weisz, Michael Muller, Parikshit Ram, Werner Geyer, Casey Dugan, Yla Tausczik, Horst Samulowitz, and Alexander Gray. 2019. Human-AI collaboration in data science: Exploring data scientists’ perceptions of automated AI. Proceedings of the ACM on Human-Computer Interaction 3, CSCW (2019), 1–24.
[152] Peter Wegner. 1997. Why interaction is more powerful than algorithms. Commun. ACM 40, 5 (1997), 80–91.
[153] Miaomiao Wen, Nancy Baym, Omer Tamuz, Jaime Teevan, Susan T Dumais, and Adam Kalai. 2015. OMG UR Funny! Computer-Aided Humor with an Application to Chat. In ICCC. 86–93.
[154] Tom White and Ian Loh. 2017. Generating Animations by Sketching in Conceptual Space. In ICCC. 261–268.
[155] Alena Widows and Harriet Sandilands. 2009. The Poetry Machine. www.thepoetrymachine.net
[156] Geraint A Wiggins. 2006. A preliminary framework for description, analysis and comparison of creative systems. Knowledge-Based Systems 19, 7 (2006), 449–458.
[157] Blake Williford, Abhay Doke, Michel Pahud, Ken Hinckley, and Tracy Hammond. 2019. DrawMyPhoto: assisting novices in drawing from photographs. In Proceedings of the 2019 on Creativity and Cognition. 198–209.
[158] Lauren Winston and Brian Magerko. 2017. Turn-taking with improvisational co-creative agents. In Thirteenth Artificial Intelligence and Interactive Digital Entertainment Conference.
[159] Jun Xiao, Xuemei Zhang, Phil Cheatle, Yuli Gao, and C Brian Atkins. 2008. Mixed-initiative photo collage authoring. In Proceedings of the 16th ACM international conference on Multimedia. 509–518.
[160] Qian Yang, Aaron Steinfeld, Carolyn Rosé, and John Zimmerman. 2020. Re-examining whether, why, and how human-AI interaction is uniquely difficult to design. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1–13.
[161] Georgios N Yannakakis, Antonios Liapis, and Constantine Alexopoulos. 2014. Mixed-initiative co-creativity. (2014).
[162] Matthew Yee-King and Mark d’Inverno. 2016. Experience driven design of creative systems. (2016).
[163] Wako Yoshida, Ray J Dolan, and Karl J Friston. 2008. Game theory of mind. PLoS Computational Biology 4, 12 (2008), e1000254.
[164] Michael W Young and Oliver Bown. 2010. Clap-along: A negotiation strategy for creative musical interaction with computational systems. In Proceedings of the International Conference on Computational Creativity 2010. Department of Informatics Engineering, University of Coimbra, 215–222.
[165] Chao Zhang, Cheng Yao, Jianhui Liu, Zili Zhou, Weilin Zhang, Lijuan Liu, Fangtian Ying, Yijun Zhao, and Guanyun Wang. 2021. StoryDrawer: A Co-Creative Agent Supporting Children’s Storytelling through Collaborative Drawing. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems. 1–6.
[166] Viktor Zoric and Björn Gambäck. 2018. The Image Artist: Computer Generated Art Based on Musical Input. In ICCC. 296–303.