diff --git a/tests/cassettes/TestTaskDataset.test_tool_failure.yaml b/tests/cassettes/TestTaskDataset.test_tool_failure.yaml
index 5d62549b..0dd51644 100644
--- a/tests/cassettes/TestTaskDataset.test_tool_failure.yaml
+++ b/tests/cassettes/TestTaskDataset.test_tool_failure.yaml
@@ -53,7 +53,7 @@ interactions:
       host:
       - api.openai.com
       user-agent:
-      - AsyncOpenAI/Python 1.47.1
+      - AsyncOpenAI/Python 1.46.1
       x-stainless-arch:
       - arm64
       x-stainless-async:
@@ -63,7 +63,7 @@ interactions:
       x-stainless-os:
       - MacOS
       x-stainless-package-version:
-      - 1.47.1
+      - 1.46.1
       x-stainless-raw-response:
       - "true"
       x-stainless-runtime:
@@ -75,20 +75,20 @@ interactions:
   response:
     body:
       string: !!binary |
-        H4sIAAAAAAAAAwAAAP//dFJNb5tAFLzzK1bvbFcY7Ljm5kZJ61PbSG3SlgitlwdeZ7+6u1hGlv97
-        gbQAMV1r4d7vVJqn5+3B69jC4+trusqO1c38U/hlXm/3u/Ljt+qmKMd+3ejatIGKSrG+nxHf48mV
-        GSGgqGy1hhq0mUNq2e5KTwhQW1YSlW+ywymFvxXaOoUkhXuLOVrOXsgn6tGlMElBcpXVSG0KSVtC
-        CpIex8gZ3hicg/e+n0flWSwqR8Wl1Qt+7q9J6NJYvXVXrUPBFXe7zCJ17enBeW0678andYDqzQ2D
-        sVoan3n9gqoZOF8tu3kwLODARosL6bWnYsAXUfQ/VZajp7xdg37zuoRclcOEsI/ZnhNc7TzKrOCq
-        RGssb3cMCpMtwkUYxTdsm0NwDv4BAAD//wMAw2VXXmwDAAA=
+        H4sIAAAAAAAAA3RS0W6bMBR9z1dY9zlMhAZIeGuqVd3b1k2qmjIhx1zAibEd22iNovz7BKRAo40H
+        9qn5m9pVqdxe3lsfOrXPSDg5lDmJQtZ+sbSCg1bGh0/dFG/vLSDg5lDmJQtZ+W5p4tXJ6o40H
+        dkYnBDwyfObaZfZkp6aNUFpqOnRCg1bGh0dFG/vLSDg5lDmJQtZ+sbSCg1bGh0dFG/vLSDg5l
+        DmJQtZ+W5p4tXJ6o40HdkYnBDwyfObaZfZkp6aNUFpqOnRCg1bGh0dFG/vLSDg5lDmJQtZ+sb
+        SCg1bGh0dFG/vLSDg5lDmJQtZ+W5p4tXJ6o40HdkYnBDwyfObaZfZkp6aNUFpqOnRCg1bGh0d
+        FG/vLSDg5lDmJQtZ+sbSCg1bGh0dFG/vLSDg5lDmJQtZ+W5p4tXJ6o40HdkYnBDwyfObaZfZk
+        AAAA//8DANDCsEhsAwAA
     headers:
       CF-Cache-Status:
       - DYNAMIC
       CF-RAY:
-      - 8c7d48a45f6b1574-SJC
+      - 8c85d42d78cb7e25-SJC
       Connection:
       - keep-alive
       Content-Encoding:
@@ -96,14 +96,14 @@ interactions:
       Content-Type:
       - application/json
       Date:
-      - Mon, 23 Sep 2024 20:28:54 GMT
+      - Tue, 24 Sep 2024 21:22:16 GMT
       Server:
       - cloudflare
       Set-Cookie:
-      - __cf_bm=pUVZr5fHrragLrnhgo0fs.mC8TQTmmaOG3UUe48NnDM-1727123334-1.0.1.1-6d4l70w0QlQ4OYuHxkjXkp0yvtQyPUJ5W_tHGrSSoOst3Afh4iA5L5hcqOBDrDWf9wpMbDNHef6Nfw96vqKgXw;
-        path=/; expires=Mon, 23-Sep-24 20:58:54 GMT; domain=.api.openai.com; HttpOnly;
+      - __cf_bm=lO6q5yU3msZc0ua1Mj7X_ZpNQQhU9qn5.z3B7DlGyCg-1727212936-1.0.1.1-xkjRCYXLptyEGIc_Up3IXo22QgquT.EnIPD6ui_u32fmt9aieEFu.RY2XFq_WxkSEQ8hFMdqEbvqrJJtlA8CPg;
+        path=/; expires=Tue, 24-Sep-24 21:52:16 GMT; domain=.api.openai.com; HttpOnly;
         Secure; SameSite=None
-      - _cfuvid=anzCkqd8zjHwB2ExNfJr.qYFHBuODWdFupY458Udse0-1727123334295-0.0.1.1-604800000;
+      - _cfuvid=f7kPIkexTaQh34Rqfqd8We0Z4.XVU5p4DlgDn0DtthM-1727212936024-0.0.1.1-604800000;
         path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
       Transfer-Encoding:
       - chunked
@@ -116,25 +116,25 @@ interactions:
       openai-organization:
       - future-house-xr4tdh
      openai-processing-ms:
-      - "368"
+      - "448"
      openai-version:
      - "2020-10-01"
      strict-transport-security:
-      - max-age=15552000; includeSubDomains; preload
+      - max-age=31536000; includeSubDomains; preload
      x-ratelimit-limit-requests:
      - "10000"
      x-ratelimit-limit-tokens:
      - "30000000"
      x-ratelimit-remaining-requests:
-      - "9998"
+      - "9999"
      x-ratelimit-remaining-tokens:
      - "29999869"
      x-ratelimit-reset-requests:
-      - 8ms
+      - 6ms
      x-ratelimit-reset-tokens:
      - 0s
      x-request-id:
-      - req_27c9b024bd388ad1ed4eea142b270e26
+      - req_bf0b1b13a29ff46c8aead077f0f87ef0
     status:
       code: 200
       message: OK
@@ -142,47 +142,60 @@ interactions:
 - request:
     body: '{"messages": [{"role": "user", "content": "Provide the citation for
       the following text in MLA Format. Do not write an introductory sentence. If reporting
-      date accessed, the current year is 2024\n\nA Perspective on Explanations of
-      Molecular\nPrediction Models\nGeemi P. Wellawatte,\u2020\nHeta A. Gandhi,\u2021\nAditi
-      Seshadri,\u2021 and\nAndrew\nD. White\u2217,\u2021\n\u2020Department of Chemistry,
-      University of Rochester, Rochester, NY, 14627\n\u2021Department of Chemical
-      Engineering, University of Rochester, Rochester, NY, 14627\n\u00b6Vial Health
-      Technology, Inc., San Francisco, CA 94111\nE-mail: andrew.white@rochester.edu\nAbstract\nChemists
-      can be skeptical in using deep learning (DL) in decision making, due to\nthe
-      lack of interpretability in \u201cblack-box\u201d models. Explainable artificial
-      intelligence\n(XAI) is a branch of AI which addresses this drawback by providing
-      tools to interpret\nDL models and their predictions. We review the principles
-      of XAI in the domain of\nchemistry and emerging methods for creating and evaluating
-      explanations. Then we\nfocus on methods developed by our group and their applications
-      in predicting solubil-\nity, blood-brain barrier permeability, and the scent
-      of molecules. We show that XAI\nmethods like chemical counterfactuals and descriptor
-      explanations can explain DL pre-\ndictions while giving insight into structure-property
-      relationships. Finally, we discuss\nhow a two-step process of developing a black-box
-      model and explaining predictions can\nuncover structure-property relationships.\n1\nIntroduction\nDeep
-      learning (DL) is advancing the boundaries of computational chemistry because
-      it can\naccurately model non-linear structure-function relationships.1\u20133
-      Applications of DL can be\nfound in a broad spectrum spanning from quantum computing4,5
-      to drug discovery6\u201310 to\nmaterials design.11,12 According to Kre 13, DL
-      models can contribute to scientific discovery\nin three \u201cdimensions\u201d
-      - 1) as a \u2018computational microscope\u2019 to gain insight which are not\nattainable
-      through experiments 2) as a \u2018resource of inspiration\u2019 to motivate
-      scientific thinking\n3) as an \u2018agent of understanding\u2019 to uncover
-      new observations. However, the rationale of\na DL prediction is not always apparent
-      due to the model architecture consisting a large\nparameter count.14,15 DL models
-      are thus often termed\u201cblack box\u201d models. We can only\nreason about
-      the input and output of an DL model, not the underlying cause that leads to\na
-      specific prediction.\nIt is routine in chemistry now for DL to exceed human
-      level performance \u2014 humans are\nnot good at predicting solubility from
-      structure for example161 \u2014 and so understanding how\na model makes predictions
-      can guide hypotheses. This is in contrast to a topic like finding\na stop sign
-      in an image, where there is little new to be learned about visual perception\nby
-      explaining a DL model. However, the black box nature of DL has its own limitations.\nUsers
-      are more likely to trust and use predictions from a model if they can understand
-      why\nthe prediction was made.17 Explaining predictions can help developers of
-      DL models ensure\nthe model is not learning spurious correlations.18,19 Two
-      infamous examples are, 1)neural\nnetworks that learned to recognize horses by
-      looking for a photogr\n\nCitation:"}], "model": "gpt-4o-2024-08-06", "stream":
-      false, "temperature": 0.0}'
+      date accessed, the current year is 2024\n\n\n\n\n\nBarack Obama - Wikipedia\n