Commit 70b465d

test(stampy.spec): parameterize the test
jrhender committed Jul 4, 2023
1 parent 78cd07a commit 70b465d
Showing 5 changed files with 75 additions and 154 deletions.
89 changes: 0 additions & 89 deletions app/mocks/question-data/question-2400.ts

This file was deleted.

1 change: 1 addition & 0 deletions app/mocks/question-data/question-8486.json
@@ -0,0 +1 @@
{"items":[{"id":"i-8899afafb469e7a7e3691f2b506fec68b4567eb11d991ce0f21e99ad1527f4ee","type":"row","href":"https://coda.io/apis/v1/docs/fau7sl2hmG/tables/grid-sync-1059-File/rows/i-8899afafb469e7a7e3691f2b506fec68b4567eb11d991ce0f21e99ad1527f4ee","name":"What is AI safety?","index":349,"createdAt":"2023-01-14T14:46:14.123Z","updatedAt":"2023-06-17T00:07:28.126Z","browserLink":"https://coda.io/d/_dfau7sl2hmG#_tugrid-sync-1059-File/_rui-8899afafb469e7a7e3691f2b506fec68b4567eb11d991ce0f21e99ad1527f4ee","values":{"File":"What is AI safety?","Synced":false,"Sync account":{"@context":"http://schema.org/","@type":"StructuredValue","additionalType":"row","name":"stampysaisafetyinfo@gmail.com","url":"https://coda.io/d/_dfau7sl2hmG#_tuGlobal-External-Connections/_rui-1fffa4c7-80ba-4bd0-804e-e432be8d2052","tableId":"Global-External-Connections","rowId":"i-1fffa4c7-80ba-4bd0-804e-e432be8d2052","tableUrl":"https://coda.io/d/_dfau7sl2hmG#_tuGlobal-External-Connections"},"Question":"```What is AI safety?```","Link":{"@context":"http://schema.org/","@type":"WebPage","url":"https://docs.google.com/document/d/1zz1c6rRN8Y-CmO0-BGVKI9G6MMg2MUqD247dPcub144/edit?usp=drivesdk"},"Thumbnail":{"@context":"http://schema.org/","@type":"ImageObject","name":"image.jpeg","height":220,"width":170,"url":"https://codahosted.io/docs/fau7sl2hmG/blobs/bl-4_OLuliiY9/28f78d3dc11cb907a9e2d579ff81a56401b61b45973e6b28db154f0ffd7bb7d90a68bf483c431c84f1093036e4d635b78c7a12dd7a7cad4756446dbd4503e4d1af9013f31859a31b8bdae07984e864d1f584394864e13105331e59981ef61d91e3fbbb85","status":"live"},"Doc Created":"2023-01-14T14:48:30.226+01:00","Related Answers DO NOT EDIT":[{"@context":"http://schema.org/","@type":"StructuredValue","additionalType":"row","name":"What is the difference between AI safety, AI alignment, AI control, friendly AI, AI ethics, AI existential safety and AGI safety?","url":"https://coda.io/d/_dfau7sl2hmG#_tugrid-sync-1059-File/_rui-53897773ddbc2889ee036970bb572cffaef2ead71d29cfccecdcac6c51a181a2","tableId":"grid-sync-1059-File","rowId":"i-53897773ddbc2889ee036970bb572cffaef2ead71d29cfccecdcac6c51a181a2","tableUrl":"https://coda.io/d/_dfau7sl2hmG#_tugrid-sync-1059-File"},{"@context":"http://schema.org/","@type":"StructuredValue","additionalType":"row","name":"What approaches are AI alignment organizations working on?","url":"https://coda.io/d/_dfau7sl2hmG#_tugrid-sync-1059-File/_rui-cd5b637d614c18e592dbee9c05adce59dc98163baba9ac36604b736fa76c76ab","tableId":"grid-sync-1059-File","rowId":"i-cd5b637d614c18e592dbee9c05adce59dc98163baba9ac36604b736fa76c76ab","tableUrl":"https://coda.io/d/_dfau7sl2hmG#_tugrid-sync-1059-File"}],"Tags":"","Doc Last Edited":"2023-06-16T22:55:38.462+02:00","Status":{"@context":"http://schema.org/","@type":"StructuredValue","additionalType":"row","name":"Live on site","url":"https://coda.io/d/_dfau7sl2hmG#_tugrid-IWDInbu5n2/_rui-7EfvxV9G0N","tableId":"grid-IWDInbu5n2","rowId":"i-7EfvxV9G0N","tableUrl":"https://coda.io/d/_dfau7sl2hmG#_tugrid-IWDInbu5n2"},"Edit Answer":"**[What is AI safety?](https://docs.google.com/document/d/1zz1c6rRN8Y-CmO0-BGVKI9G6MMg2MUqD247dPcub144/edit?usp=drivesdk)**","Alternate Phrasings":"","UI ID DO NOT EDIT":"```8486```","Source Link":"","aisafety.info Link":"**[What is AI safety?](https://aisafety.info/?state=8486_)**","Source":"```Wiki```","All Phrasings":"```What is AI safety?\n```","Initial Order":"","Related IDs":["```6714```","```6178```"],"Rich Text DO NOT EDIT":"```<iframe src=\"https://www.youtube.com/embed/pYXy-A4siMw\" title=\"Intro to AI Safety, Remastered\" 
frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" allowfullscreen></iframe>\n\nIn the [coming decades](https://www.cold-takes.com/most-important-century/), AI systems could be invented that outperform humans on most tasks, including strategy, persuasion, economic productivity, scientific research and development, and AI design. We don&#39;t know how to [align such systems](https://www.cold-takes.com/why-ai-alignment-could-be-hard-with-modern-deep-learning/) with the intentions of their users, even when those intentions are good. This could lead to catastrophic outcomes.\n\nThe research field of AI safety was founded to prevent such disasters, and enable humanity to use the enormous potential of advanced AI to solve problems and improve the world. There are many kinds of AI risk, but the kind that this website focuses on, because it seems both plausible and extreme in scope, is [existential risk](https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence) from misaligned AI systems [disempowering or killing humanity](https://80000hours.org/problem-profiles/artificial-intelligence/#power-seeking-ai).\n\nExamples of work on AI existential safety are:\n\n- [Agent foundations](https://www.alignmentforum.org/posts/FWvzwCDRgcjb9sigb/why-agent-foundations-an-overly-abstract-explanation): Understanding what intelligence and agency are at a fundamental level\n\n- [Prosaic alignment](/?state=89LM): Developing methods like [debate](https://openai.com/research/debate) and [iterated distillation and amplification](/?state=897J) to align more powerful versions of current AI techniques\n\n- [AI policy and governance](https://80000hours.org/articles/ai-policy-guide/): Setting up institutions and mechanisms that cause the major actors to implement good AI safety practices\n\nExamples of work from the broader AI safety field are:\n\n- Getting content recommender systems to not radicalize their users\n\n- Ensuring autonomous cars don’t kill people\n\n- Advocating strict regulations for lethal autonomous weapons\n\nSome kinds of research are useful for addressing both existential risk and smaller-scale bad outcomes:\n\n- [Robustness to distribution shift](https://www.alignmentforum.org/tag/distributional-shifts): making AI systems more able to function reliably outside of the context they were trained in\n\n- [Interpretability](/?state=8241): giving humans insight into the inner workings of AI systems such as neural networks\n\nThis website is a single point of access where people can read summaries and find links to the best information on concepts related to AI existential safety. The goal is to help readers contribute to the effort to ensure that humanity avoids these risks and reaches a wonderful future.\n\n```","Tag Count":0,"Related Answer Count":2,"Rich Text":"```<iframe src=\"https://www.youtube.com/embed/pYXy-A4siMw\" title=\"Intro to AI Safety, Remastered\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" allowfullscreen></iframe>\n\nIn the [coming decades](https://www.cold-takes.com/most-important-century/), AI systems could be invented that outperform humans on most tasks, including strategy, persuasion, economic productivity, scientific research and development, and AI design. 
We don&#39;t know how to [align such systems](https://www.cold-takes.com/why-ai-alignment-could-be-hard-with-modern-deep-learning/) with the intentions of their users, even when those intentions are good. This could lead to catastrophic outcomes.\n\nThe research field of AI safety was founded to prevent such disasters, and enable humanity to use the enormous potential of advanced AI to solve problems and improve the world. There are many kinds of AI risk, but the kind that this website focuses on, because it seems both plausible and extreme in scope, is [existential risk](https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence) from misaligned AI systems [disempowering or killing humanity](https://80000hours.org/problem-profiles/artificial-intelligence/#power-seeking-ai).\n\nExamples of work on AI existential safety are:\n\n- [Agent foundations](https://www.alignmentforum.org/posts/FWvzwCDRgcjb9sigb/why-agent-foundations-an-overly-abstract-explanation): Understanding what intelligence and agency are at a fundamental level\n\n- [Prosaic alignment](/?state=89LM): Developing methods like [debate](https://openai.com/research/debate) and [iterated distillation and amplification](/?state=897J) to align more powerful versions of current AI techniques\n\n- [AI policy and governance](https://80000hours.org/articles/ai-policy-guide/): Setting up institutions and mechanisms that cause the major actors to implement good AI safety practices\n\nExamples of work from the broader AI safety field are:\n\n- Getting content recommender systems to not radicalize their users\n\n- Ensuring autonomous cars don’t kill people\n\n- Advocating strict regulations for lethal autonomous weapons\n\nSome kinds of research are useful for addressing both existential risk and smaller-scale bad outcomes:\n\n- [Robustness to distribution shift](https://www.alignmentforum.org/tag/distributional-shifts): making AI systems more able to function reliably outside of the context they were trained in\n\n- [Interpretability](/?state=8241): giving humans insight into the inner workings of AI systems such as neural networks\n\nThis website is a single point of access where people can read summaries and find links to the best information on concepts related to AI existential safety. 
The goal is to help readers contribute to the effort to ensure that humanity avoids these risks and reaches a wonderful future.\n\n```","Stamp Count":0,"Multi Answer":"","Stamped By":"","Priority":2,"Asker":"```Magdalena```","External Source":"","Last Asked On Discord":"","UI ID":"```8486```","Related Answers":[{"@context":"http://schema.org/","@type":"StructuredValue","additionalType":"row","name":"What is the difference between AI safety, AI alignment, AI control, friendly AI, AI ethics, AI existential safety and AGI safety?","url":"https://coda.io/d/_dfau7sl2hmG#_tugrid-sync-1059-File/_rui-53897773ddbc2889ee036970bb572cffaef2ead71d29cfccecdcac6c51a181a2","tableId":"grid-sync-1059-File","rowId":"i-53897773ddbc2889ee036970bb572cffaef2ead71d29cfccecdcac6c51a181a2","tableUrl":"https://coda.io/d/_dfau7sl2hmG#_tugrid-sync-1059-File"},{"@context":"http://schema.org/","@type":"StructuredValue","additionalType":"row","name":"What approaches are AI alignment organizations working on?","url":"https://coda.io/d/_dfau7sl2hmG#_tugrid-sync-1059-File/_rui-cd5b637d614c18e592dbee9c05adce59dc98163baba9ac36604b736fa76c76ab","tableId":"grid-sync-1059-File","rowId":"i-cd5b637d614c18e592dbee9c05adce59dc98163baba9ac36604b736fa76c76ab","tableUrl":"https://coda.io/d/_dfau7sl2hmG#_tugrid-sync-1059-File"}],"Doc Last Ingested":"2023-06-16T23:16:39.168+02:00","Request Count":"","Number of suggestions on answer doc":0,"Total character count of suggestions on answer doc":0,"Helpful":10,"Number of pending comments":16,"Length":2904}}],"href":"https://coda.io/apis/v1/docs/fau7sl2hmG/tables/grid-sync-1059-File/rows?pageToken=eyJsaW1pdCI6MjAwLCJvZmZzZXQiOjAsIm9wVmVyc2lvbiI6OTgyNDgsInF1ZXJ5IjoiYy0yMDNLd1NDMk5fOlwiODQ4NlwiIiwic2NoZW1hVmVyc2lvbiI6MTcyLCJzb3J0QnkiOiJuYXR1cmFsIiwidXNlQ29sdW1uTmFtZXMiOnRydWUsInZhbHVlRm9ybWF0IjoicmljaCJ9"}