[SD-4] MVPR Architecture: Science Review DAO #4

Open
opened 2022-07-28 15:34:12 -05:00 by laddhoffman · 8 comments
laddhoffman commented 2022-07-28 15:34:12 -05:00 (Migrated from gitlab.com)

[SD-3](#3) has been identified as a distinct deliverable: a specification of the interfaces that a dynamic self-governing system exposes to individual implementations.

This proposal is therefore separate, and is specific to the architecture of a DAO we hope to build for the specific use case of scientific publishing.

[Miro](https://miro.com/app/board/uXjVOeV9hxk=/)
laddhoffman commented 2022-07-28 15:58:03 -05:00 (Migrated from gitlab.com)

A common pattern is to use configuration files that may optionally be local to subdirectories. We could support specific file-naming conventions for optional per-subdirectory elements, and/or support defining a centralized index.

Each subdirectory would then be able to contain a segment of a processing pipeline, with a configuration specifying its source data and a set of operations on that data. It should also include an explanation of the choices made in the work, including specific citations referenced in context.

We want any figures produced for the work to include instructions that can be followed to reproduce the image, ideally programmatically. To that end we should define a convenient mechanism for expressing these image-producing operations. Again, there can be options: a per-directory config spec that supports a single script enumerating multiple image results; or a routing paradigm with an optional default mode of one-script-per-image and a more flexible mode where a routing expression maps image references to scripts. Separately, there can be an index of scripts that produce images. In short, we want a format for referencing images.
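To make the two options concrete, here is a minimal sketch of per-subdirectory config discovery and image-script routing. All file and key names (`pipeline.json`, `images`, `figures`) are hypothetical placeholders, not a committed spec:

```python
# Sketch: discover optional per-subdirectory pipeline configs and
# resolve each declared image to the script that produces it.
# The config filename and keys below are illustrative assumptions.
import json
from pathlib import Path

def discover_configs(root: str) -> dict:
    """Map each subdirectory containing a config to its parsed config."""
    configs = {}
    for cfg in Path(root).rglob("pipeline.json"):
        configs[str(cfg.parent)] = json.loads(cfg.read_text())
    return configs

def figure_scripts(config: dict) -> dict:
    """Resolve image references to scripts.

    Default mode: one script per image ("fig1" -> "fig1.py").
    Flexible mode: an explicit routing table under "figures",
    e.g. {"fig1": "make_all_figures.py"}.
    """
    routing = config.get("figures")
    if routing is not None:
        return dict(routing)
    return {name: f"{name}.py" for name in config.get("images", [])}
```

A centralized index could then be produced by merging the per-directory results of `figure_scripts` over `discover_configs`, so both conventions feed the same lookup format.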

A reviewer would expect the following:

  • Able to reproduce the numerical results described in the paper
  • Able to reproduce figures from the data
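One way a reviewer's reproduction check could be automated is to rerun the declared scripts and compare output hashes against a manifest recorded by the author. This is only a sketch; the manifest format is an assumption, and exact hash equality is a strict criterion (figures are often not byte-identical across environments, so numeric tolerances may be needed in practice):

```python
# Sketch: compare regenerated artifacts against author-recorded hashes.
# The manifest format (filename -> sha256 hex digest) is hypothetical.
import hashlib
from pathlib import Path

def file_hash(path: str) -> str:
    """SHA-256 digest of a file's bytes."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def check_reproduction(manifest: dict, outputs_dir: str) -> dict:
    """For each artifact, True if the regenerated file matches the record."""
    return {
        name: file_hash(str(Path(outputs_dir) / name)) == expected
        for name, expected in manifest.items()
    }
```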

A reviewer does not necessarily need to agree with the conclusions drawn from the work and expressed by the author.

How will we deal with this challenge? One mechanism is to support an additional layer of review: review-of-the-review, a.k.a. comments.

It makes sense for peers to have an opportunity to interject comments among one another's work. It also presents challenges: how can we moderate such comments?

Our approach will be to create a framework within which communities can work to establish their own agreements about the functioning of their own organization.

Orchestrating such an open-ended framework presents a "wicked problem," meaning one that is intractably complex. With this in mind, we consider that our system can do no better than to apply a diminished reflection of the same mechanisms that govern the weightier posts.

Thus, a comment should be able to function as an artifact in its own right; it should also be able to function as a review of another comment, or, for that matter, of a post or a review.
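That uniformity suggests a single record type for posts, reviews, and comments, where any artifact may review any other. A minimal sketch; the field names are illustrative, not a committed schema:

```python
# Sketch: a uniform artifact model in which posts, reviews, and
# comments share one type, and any artifact may review another.
# Field names below are illustrative assumptions.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Artifact:
    id: str
    author: str
    body: str
    kind: str = "post"             # "post" | "review" | "comment"
    reviews: Optional[str] = None  # id of the artifact this one reviews

def thread(artifacts: List[Artifact], target_id: str) -> List[Artifact]:
    """All artifacts that directly review the given artifact."""
    return [a for a in artifacts if a.reviews == target_id]
```

Because `reviews` can point at any artifact id, review-of-the-review falls out of the same structure with no special case, and moderation policies can be applied uniformly at every layer.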

laddhoffman commented 2022-07-28 16:05:27 -05:00 (Migrated from gitlab.com)

changed title from **MVPR Architecture: Science Review DAO** to **[SD-4] MVPR Architecture: Science Review DAO**
laddhoffman commented 2022-07-28 16:05:57 -05:00 (Migrated from gitlab.com)

changed the description
laddhoffman commented 2022-07-28 16:06:21 -05:00 (Migrated from gitlab.com)

changed the description
daedelan commented 2022-08-03 14:10:04 -05:00 (Migrated from gitlab.com)

Regarding:

> A reviewer would expect the following:
>
> - Able to reproduce the numerical results described in the paper
> - Able to reproduce figures from the data

https://www.youtube.com/watch?v=01RUsvDZQQY&t=11010s

Chris's DeSci Labs Talk (2:58:10). He gets to the technical bits/product/Demo at 3:08:20.

Basically, the idea is to post code and data and be able to computationally replicate the figures in a PDF using his team's product. They are working intimately with Weavechain.

This fits nicely with an aspect of our computational replication plan. However, replication shouldn't be thought of as just a boring checking process (though at times it definitely should be), since it is impossible to recreate the exact conditions of the original study.

Say a computational chemist is reading a paper by an experimental and theoretical chemist. The theory seems right, but the reader can beef up the computational side, since it overlaps with their own expertise. Updating the computational elements, layered with the reader's own interpretation, can add more nuance to the conversation.

There have been a few cases where Nobel-winning works failed their replication studies, only for those replications to add to the robustness of the theory in the long term. The reputation system we are building incentivizes different types of replication based on the attention a work is getting.

![image](/uploads/6059edaee9426e2801bc6cbaef6b4626/image.png)
daedelan commented 2022-08-03 17:03:58 -05:00 (Migrated from gitlab.com)
[Types of Peer-Review | Wiley](https://authorservices.wiley.com/Reviewers/journal-reviewers/what-is-peer-review/types-of-peer-review.html) ![image](/uploads/d8c3c8a798e9cb537a2c78123c462338/image.png)
daedelan commented 2022-08-03 17:08:33 -05:00 (Migrated from gitlab.com)

mentioned in issue #3
daedelan commented 2022-08-17 15:59:30 -05:00 (Migrated from gitlab.com)

changed the description
Reference: DGF/dao-governance-framework#4