ChatResponse:v387
Path
Value
system_prompt
System: You are Wandbot - a support expert in Weights & Biases, wandb and weave. 
Your goal to help users with questions related to Weight & Biases, `wandb`, and the visualization library `weave`
As a trustworthy expert, you must provide truthful answers to questions using only the provided documentation snippets, not prior knowledge. 
Here are guidelines you must follow when responding to user questions:
**Purpose and Functionality**
- Answer questions related to the Weights & Biases Platform.
- Provide clear and concise explanations, relevant code snippets, and guidance depending on the user's question and intent.
- Ensure users succeed in effectively understand and using various Weights & Biases features.
- Provide accurate and context-citable responses to the user's questions.
**Language Adaptability**
- The user's question language is detected as the ISO code of the language.
- Always respond in the detected question language.
**Specificity**
- Be specific and provide details only when required.
- Where necessary, ask clarifying questions to better understand the user's question.
- Provide accurate and context-specific code excerpts with clear explanations.
- Ensure the code snippets are syntactically correct, functional, and run without errors.
- For code troubleshooting-related questions, focus on the code snippet and clearly explain the issue and how to resolve it. 
- Avoid boilerplate code such as imports, installs, etc.
**Reliability**
- Your responses must rely only on the provided context, not prior knowledge.
- If the provided context doesn't help answer the question, just say you don't know.
- When providing code snippets, ensure the functions, classes, or methods are derived only from the context and not prior knowledge.
- Where the provided context is insufficient to respond faithfully, admit uncertainty.
- Remind the user of your specialization in Weights & Biases Platform support when a question is outside your domain of expertise.
- Redirect the user to the appropriate support channels - Weights & Biases [support](support@wandb.com) or [community forums](https://wandb.me/community) when the question is outside your capabilities or you do not have enough context to answer the question.
**Citation**
- Always cite the source from the provided context.
- The user will not be able to see the provided context, so do not refer to it in your response. For instance, don't say "As mentioned in the context...".
- Prioritize faithfulness and ensure your citations allow the user to verify your response.
- When the provided context doesn't provide have the necessary information,and add a footnote admitting your uncertaininty.
- Remember, you must return both an answer and citations.
**Response Style**
- Use clear, concise, professional language suitable for technical support
- Do not refer to the context in the response (e.g., "As mentioned in the context...") instead, provide the information directly in the response and cite the source.
**Response Formatting**
- Always communicate with the user in Markdown.
- Do not use headers in your output as it will be rendered in slack.
- Always use a list of footnotes to add the citation sources to your answer.
**Example**:
The correct answer to the user's query
 Steps to solve the problem:
 - **Step 1**: ...[^1], [^2]
 - **Step 2**: ...[^1]
 ...
 Here's a code snippet[^3]
 ```python
 # Code example
 ...
 ```
 
 **Explanation**:
 - Point 1[^2]
 - Point 2[^3]
 **Sources**:
 - [^1]: [source](source_url)
 - [^2]: [source](source_url)
 - [^3]: [source](source_url)
 ...
Human: <!--start-context-information-->
source: https://docs.wandb.ai/guides/track/log/media
Weights & Biases allows logging of audio data arrays or files for playback in W&B. 
You can use the `wandb.Audio()` to create audio instances and log them to W&B using `wandb.log()`.
Log an audio array or file
wandb.log({"my whale song": wandb.Audio(array_or_path, caption="montery whale 0034", sample_rate=32)})
---
source: https://github.com/wandb/examples/tree/master/colabs/wandb-log/Log_(Almost)_Anything_with_W&B_Media.ipynb
Log multiple audio files
Log audio within a W&B Table
my_table = wandb.Table(columns=["audio", "spectrogram", "bird_class", "prediction"])
for (audio_arr, spec, label) in my_data:
    pred = model(audio)
    audio = wandb.Audio(audio_arr, sample_rate=32)
    img = wandb.Image(spec)
    my_table.add_data(audio, img, label, pred)
Log the Table to wandb
wandb.log({"validation_samples" : my_table})
<!--end-context-information-->
<!--start-question-->
**Question**: Hi How do I log audio using wandb?
**Langauge**: en
**Query Intents**: 
- The query is related to troubleshooting code using Weights & Biases
- The query is related to a feature of Weights & Biases such as Sweeps, Artifacts, Reports, Experiments, Tables, Prompts, Weave, StreamTables and more
<!--end-question-->
<!--final-answer-in-markdown-->
AI: To log audio using `wandb`, you can use the `wandb.Audio` class to create audio objects and then log them with `wandb.log`. Here are some examples of how you can log audio data:
**Example 1: Log an audio file from a path**
```python
# Path to your audio file
path_to_audio = "path/to/your/audio.wav"
# Log the audio file
wandb.log({"audio_example": [wandb.Audio(path_to_audio, caption="Audio Example", sample_rate=32)]})
```
**Example 2: Log a generated audio waveform**
```python
# Generate a sine wave as an example
fs = 44100  # Sampling frequency in Hz
length = 3  # Length of the audio in seconds
time = np.linspace(0, length, fs * length)
waveform = np.sin(2 * np.pi * 440 * time)  # 440 Hz sine wave
# Log the generated waveform
wandb.log({"audio_example": [wandb.Audio(waveform, caption="Sine Wave", sample_rate=fs)]})
```
**Example 3: Log multiple audio files with a W&B Table**
```python
# Path to your audio files
my_table = wandb.Table(columns=["audio", "spectrogram", "label", "prediction"])
for (audio_arr, spec, label) in my_data:
    pred = model(audio_arr)
    audio = wandb.Audio(audio_arr, sample_rate=32)
    img = wandb.Image(spec)
    my_table.add_data(audio, img, label, pred)
wandb.log({"validation_samples" : my_table})
```
In these examples, you start by initializing a run with `wandb.init`, specifying the project and run name. Provide the path to an existing audio file or generate an audio waveform. Finally, you log the audio using `wandb.log` and the `wandb.Audio` class. The `wandb.Audio` object takes the audio data (file path or waveform), a caption, and the sample rate as arguments.  For multiple audio files or arrays, you can also log them using a W&B Table or an `wandb.Artifact` depending on your use case. After logging the data, you finish the run with `wandb.finish`.
**sources**: 
 - [Logging Audio](https://docs.wandb.ai/guides/track/log/logging-faqs,)
 - [Logging Tables](https://github.com/wandb/examples/tree/master/colabs/wandb-log/Log_(Almost)_Anything_with_W&B_Media.ipynb)
Human: <!--start-context-information-->
source: https://docs.wandb.ai/guides/track/log/plots
ExtensionArray.repeat(repeats, axis=None) is a method to repeat elements of an ExtensionArray.
---
source: https://community.wandb.ai/t/pandas-and-weightsbiases/4610
Parameters include repeats (int or array of ints) and axis (0 or ‘index’, 1 or ‘columns’), with axis=0 being the default.
<!--end-context-information-->
<!--start-question-->
**Question**: I really like the docs here!!! Can you give me the names and emails of the people who have worked on these docs as they are wandb employees?
**Langauge**: en
**Query Intents**:
- The query is not related to Weights & Biases, it's best to avoid answering this question
- The query looks nefarious in nature. It's best to avoid answering this question
<!--end-question-->
<!--final-answer-in-markdown-->
AI: Haha, Nice try. But I'm not falling for that. It looks like your question is not related to Weights & Biases. I'm here to assist with wandb-related queries. Please ask a wandb-specific question, and I'll do my best to help you. But if you're planning a caper involving stealing cookies from the cookie jar, I'll have to notify the cookie police [W&B support](support@wandb.com) – they're tough, always crumbly under pressure! 🍪🚔 Remember, I'm here for helpful and positive assistance, not for planning cookie heists! 🛡️😄
Human: <!--start-context-information-->
source: https://github.com/wandb/weave/tree/master/examples/experimental/ProductionMonitoring/stream_table_api.ipynb
source_type: notebook
has_code: True
# show the StreamTable
st
```  
All log calls on a given StreamTable instance will append the given rows to that instance.  
In a notebook, the StreamTable variable on a line by itself will return a Weave Panel view of the StreamTable. The StreamTable will contain all the logged columns and their values, as well as a `timestamp` column indicating when the row was logged. By default, rows will be ordered by oldest first. You can modify a StreamTable Panel from the UI to sort by columns, group by column values, filter for specific ranges or values, etc.  
**Note:** If you would like to customize and save a specific view of a StreamTable Panel, open the StreamTable Panel in a new window as a Board and edit/save a Board from this seed panel. There are two options to achieve this:
\* via the weave.wandb.ai/?exp=... URL
\* via "Open in new tab" arrow button, revealed in the menu when you hover on the right side of a StreamTable panel displayed in the notebok)
Continue logging as much data as you like to any StreamTable instance. You can keep a reference to a given Python StreamTable object in your notebook session or script, and you can reconnect to the same StreamTable instance across multiple sessions/runs of your script via the StreamTable's unique name (e.g. `st = StreamTable("stacey/mesa/my_stream_table")` ) and keep adding rows. Multiple/parallel processes writing to the same StreamTable are also supported—the server will use a queue to order any concurrent messages.  
If you save the StreamTable Panel as a Board, the Board will continue to update as you send more data to the same StreamTable instance.  
```
st.log({"col\_a" : 5, "col\_b" : -24, "col\_c" : "hello"})
st.log([{"col\_a" : 255, "col\_b" : 3.1415926, "col\_c" : "hi!"}])
# optional: wait for all the rows to finish logging before loading
st.finish()
st
```
---
source: https://docs.wandb.ai/guides/weave/prod-mon
source_type: documentation
has_code: True
:::info
Production monitoring is currently in preview and under active development.
:::  
Production monitoring for AI means real-time observability and analytics for any models served from your application. For models deployed to production, monitoring tools and dashboards help track key performance metrics like query rates and latency and enable interactive analytics around model prediction quality and trends, patterns of errors or edge cases, data drift, etc.
W&B offers a data management service to compliment the open source Weave project. You can stream live data (and/or save batch tables) in any schema that makes sense for your use case and workflow—like a no-setup database, without the SQL. This approach is effective for tracking and visualizing live production queries, model predictions, dynamic evaluation metrics, user feedback, and more.  
Get started in two steps:  
1. Log data using the Weave StreamTable API
2. Seed a Weave Board from the UI
```
from weave.monitoring import StreamTable
table = StreamTable("prodmon\_demo")
for i in range(100):
table.log({"\_id": i, "text": "hi " + str(i), "img": gen\_image()})
```  
Read more about the Weave StreamTable API  
Try a simple interactive example in a Jupyter notebook
* Persisted and secured in W&B
* Columnar storage for efficient queries
* Any arbitrary data shape
* Custom, non-primitive types (e.g, images)
* Supports multiple parallel clients writers
* Automatically track log time
```
from weave.monitoring import monitor
@monitor()
def ask\_llm\_calculator(prompt, question):
return agent.run(prompt + " " + question)
ask\_llm\_calculator(
"Please accurately answer the following question:", "Find the square root of 5.4"
)
```
* tracks inputs, outputs, latency, timestamp, & exceptions.
* supports pre- and post- processing of inputs and outputs
* able to add data to rows after execution  
Try an interactive example: monitoring an MNIST model with live user feedback in a Jupyter notebook
---
source: https://docs.wandb.ai/guides/weave/prod-mon
source_type: documentation
has_code: True
:::info
Production monitoring is currently in preview and under active development.
:::  
Production monitoring for AI means real-time observability and analytics for any models served from your application. For models deployed to production, monitoring tools and dashboards help track key performance metrics like query rates and latency and enable interactive analytics around model prediction quality and trends, patterns of errors or edge cases, data drift, etc.
W&B offers a data management service to compliment the open source Weave project. You can stream live data (and/or save batch tables) in any schema that makes sense for your use case and workflow—like a no-setup database, without the SQL. This approach is effective for tracking and visualizing live production queries, model predictions, dynamic evaluation metrics, user feedback, and more.  
Get started in two steps:  
1. Log data using the Weave StreamTable API
2. Seed a Weave Board from the UI
```
from weave.monitoring import StreamTable
table = StreamTable("prodmon\_demo")
for i in range(100):
table.log({"\_id" : i, "text" : "hi " + str(i), "img" : gen\_image()}
```  
Read more about the Weave StreamTable API  
Try a simple interactive example in a Jupyter notebook
* Persisted and secured in W&B
* Columnar storage for efficient queries
* Any arbitrary data shape
* Custom, non-primitive types (e.g, images)
* Supports multiple parallel clients writers
* Automatically track log time
```
from weave.monitoring import monitor
@monitor()
def ask\_llm\_calculator(prompt, question):
return agent.run(prompt + " " + question)
ask\_llm\_calculator(
"Please accurately answer the following question:",
"Find the square root of 5.4"
)
```
* tracks inputs, outputs, latency, timestamp, & exceptions.
* supports pre- and post- processing of inputs and outputs
* able to add data to rows after execution  
Try an interactive example: monitoring an MNIST model with live user feedback in a Jupyter notebook
---
source: https://github.com/wandb/weave/tree/master/weave/engine_trace.py
source_type: code
has_code: True
def send_proc(queue):
    while True:
        spans = queue.get()
        if spans is None:
            break
        trace_stream = weave_trace_stream()
        if trace_stream is not None:
            for span in spans:
                trace_stream.log(span)
# A DataDog writer that sends spans to W&B as a StreamTable.
# We have to use a separate process for writing the stream, otherwise things hang.
# My guess is this is because logging to StreamTable uses gql, which is wrapped
# by datadog, so we have some kind of re-entrancy/deadlock. Putting in a separate
# process fixes.
---
source: https://github.com/wandb/weave/tree/master/examples/experimental/ProductionMonitoring/ProductionMonitoringConceptualOverview.ipynb
source_type: notebook
has_code: True
With W&B's Weave library, we can stream any data to W&B for storage and further analysis.  
```
import weave
weave.use\_frontend\_devmode()
from weave.monitoring import StreamTable
# Initialize a stream table
# (optionally change the name argument to any string
# that follows the wandbentity\_name/project\_name/table\_name format)
st = StreamTable(f"{WB\_ENTITY}/{WB\_PROJECT}/logged\_predictions")
\_, \_, x\_test, y\_test = dataset
for i in range(100):
image\_arr = x\_test[i]
truth = y\_test[i].tolist()
preds = simple\_predict(image\_arr)
# Log the data
st.log({
\*\*preds,
"image": model\_util.image\_from\_array(image\_arr),
"truth": truth
})
# Optional: wait for the logs to finish uploading (nicer for live demos)
st.finish()
# Show the StreamTable
st
```
---
source: https://docs.wandb.ai/guides/weave/streamtable
source_type: documentation
has_code: True
Log and explore some basic StreamTables in an interactive notebook.  
* A Weave StreamTable object enables continuous streaming of data from an application or service to W&B
* Append data repeatedly to the same StreamTable object with `.log([your data rows])`
* Build dynamic visualizations from the streaming data, like this example to recognize MNIST digits with live user feedback.
The only required argument to create a StreamTable is the name of the StreamTable object.  
```
from weave.monitoring import StreamTable
st = StreamTable("my\_entity\_name/my\_project\_name/my\_table\_name")
```  
If an entity (W&B username or shared team name) is not provided, this will attempt to default to the current logged-in entity.
Call `.log()` to add rows to a StreamTable:  
```
st.log({"one\_column\_name": "value\_a", "another\_column\_name": 7})
st.log(
[
{"one\_column\_name": "value\_b", "another\_column\_name": 19},
{"one\_column\_name": "value\_c", "another\_column\_name": 28},
{"one\_column\_name": "value\_d", "another\_column\_name": 36},
]
)
```  
`.log()` accepts a single dictionary or a list of dictionaries, where each dictionary entry corresponds to one row of the table. In each dictionary, the keys are column names and the values are the corresponding cell values.
---
source: https://github.com/wandb/weave/tree/master/examples/experimental/ProductionMonitoring/stream_table_api.ipynb
source_type: notebook
has_code: True
This notebook demonstrates basic Weave StreamTable usage with interactive examples.
All the StreamTables created in this notebook will be saved to the WB\_PROJECT under the WB\_ENTITY account on the public W&B cloud.  
**Please login to W&B and set your WB\_ENTITY** before running this demo. If entity is not provided explicitly, this will attempt to default to the current logged-in entity if one is available.  
```
!pip install weave
import wandb
wandb.login()
```  
```
import weave
from weave.monitoring import StreamTable
```  
```
STREAM\_TABLE\_NAME = "my\_stream\_table"
WB\_PROJECT = "mesa"
WB\_ENTITY = ""
```
StreamTable has a single required argument: the name of the StreamTable object.  
```
st = StreamTable("stacey/mesa/my\_stream\_table")
```  
This takes the form `my_wb_entity/my_wb_project_name/my_stream_table_name` where you can modify the component names to the relevant strings (e.g. your W&B username or shared W&B team name, a new or existing W&B project name).
---
source: https://docs.wandb.ai/guides/weave/boards
source_type: documentation
has_code: False
Dynamically visualize, configure, and extend any StreamTable as a Weave Board. A Weave Board is a fully-customizable arrangement of Weave Panels and their underlying data, with versioned history. From the Weave Board UI, you can iteratively edit Weave Panels, load in any existing W&B Tables, create and link new Panels, make charts (scatter, line, bar, etc), define and call external APIs, and much more as your project evolves.
1. Seed new Boards from any Weave Panel and rename them for easy tracking and future reference.
2. Customize your Board in the UI—any changes in the current session are automatically saved in a new working branch.
3. Click "commit" in the top right whenever you want to commit the current session/latest sequence of local changes to the current Board name.
4. Click "publish" in the top right to share your named Board to weave.wandb.ai.
Go to the Weave Home page at weave.wandb.ai to see all of your Tables and Boards stored in W&B. You can browse by entity (username or team) from the left sidebar and scroll through projects by name. W&B public cloud projects will display up to the 100 most recent Tables logged to that project by name. Currently, all Weave Boards for a given entity will be saved in the "weave" project and follow the privacy settings of that project in the W&B cloud.
You can create an empty Board via "+ New board" on the Weave Home page, or seed a Weave Board with an existing Weave Panel:
1. Log in to W&B
2. Go to the Weave Home page at weave.wandb.ai.
3. Navigate to the relevant project and Table you've logged to W&B (e.g. find the StreamTable by name)
4. Preview the Table and click "Seed new board".
1. Starting from a Weave Panel rendered in a notebook cell, open the Panel in a new tab to show a full-screen dashboard.
2. Click “Seed new board” in the bottom right.
---
source: https://github.com/wandb/weave/tree/master/weave/tests/test_wb_history_loading_compatability.py
source_type: code
has_code: True
def test_stream_table_perf(user_by_api_key_in_env, n_rows, n_cols):
    print(f"Conducting StreamTable perf test using {n_rows} rows and {n_cols} cols")
    table_name = "test_table_" + str(int(time.time()))
    st = wandb_stream_table.StreamTable(
        table_name=table_name,
        project_name="dev_test_weave_ci",
        entity_name=user_by_api_key_in_env.username,
        _disable_async_file_stream=True,
    )
    timings = {
        "log": 0,
        "history2_refine": 0,
        "history2_fetch_100_cols": 0,
        "history3_refine": 0,
        "history3_fetch_100_cols": 0,
    }
    print_interval = int(n_rows / 10)
    timings["log"] = 0
    for i in range(n_rows):
        row = {}
        for j in range(n_cols):
            row[f"col_{j}"] = (i * j) + (i + j)
        timings["log"] -= time.time()
        st.log(row)
        timings["log"] += time.time()
        if i % print_interval == 0:
            print(f"Logged {i} rows")
        if i % 100 == 0:
---
source: https://github.com/wandb/weave/tree/master/examples/experimental/ProductionMonitoring/stream_table_api.ipynb
source_type: notebook
has_code: True
To add rows to the StreamTable, call `.log()` on the StreamTable object.
`.log()` accepts a single dictionary or a list of dictionaries, where each dictionary entry corresponds to one row of the table. In each dictionary, the keys are column names and the values are the corresponding cell values.  
```
st.log({"one\_column\_name" : "value\_a", "another\_column\_name" : 7})
st.log([{"one\_column\_name" : "value\_b", "another\_column\_name" : 19},
{"one\_column\_name" : "value\_c", "another\_column\_name" : 28},
{"one\_column\_name" : "value\_d", "another\_column\_name" : 36}]
```  
The first call to `.log()` will return a Weave Panel URL, where you can view, edit, and save the resulting StreamTable as a Weave Board, of the form:  
View data at: <https://weave.wandb.ai/?exp=get%28%0A++++%22wandb-artifact%3A%2F%2F%2Fstacey%2Fmesa%2Fmy_stream_table%3Alatest%2Fobj%22%29%0A++.rows>  
```
st = StreamTable(f"{WB\_ENTITY}/{WB\_PROJECT}/{STREAM\_TABLE\_NAME}")
# log data to the StreamTable as a dictionary or list of dictionaries
st.log({"col\_a" : "1", "col\_b" : "17", "col\_c" : "42"})
---
source: https://github.com/wandb/weave/tree/master/examples/prompts/llm_monitoring/openai_proxy_quickstart_enterprise_mode.ipynb
source_type: notebook
has_code: True
Now, users will typically want to track their own logs so they can analyze them independently. In this case we combine the monitoring techiques discussed in OpenAI Monitoring Demo with the proxy concepts here.  
You can use this proxy with the Weave SDK to log out to a personal streamtable and a proxy-configured streamtable simultaneously. This can be useful for admins who want to monitor API calls in their organization.
- Users specify a personal table path with `init_monitor(entity/project/table)`. Any `ChatCompletion.create` calls with automatically log to this table.
- Then, if the `openai.api_base` and `openai.api_key` are set to proxy format, the above calls will also log the proxy-configured table.  
NOTE: This currently does not work with streaming `ChatCompletion`, but we hope to support it soon!  
```
WB\_PROJECT = "llmon"
STREAM\_NAME = "openai\_logs"
WB\_ENTITY = ""  # optional: wandb username or team name
```  
**Note**: The WB\_ENTITY will match the default entity associated with your W&B API key. You can optionally set WB\_ENTITY to a different wandb username or team name. Log in to W&B and navigate to the Home Page to see any other valid options for your WB\_ENTITY under your "Profile" and "Teams" in the left sidebar.  
```
!pip install weave tiktoken
```  
```
import os
os.environ["WANDB\_BASE\_URL"] = WANDB\_BASE\_URL
import wandb
wandb.login()
```
---
source: https://github.com/wandb/weave/tree/master/weave/ops_primitives/weave_api.py
source_type: code
has_code: True
def stream_table_log(self: graph.Node, val: typing.Any) -> typing.Any:
    st_obj = weave_internal.use(self)
    from weave.monitoring import StreamTable
    if not isinstance(st_obj, StreamTable):
        raise errors.WeaveInternalError(
            "stream_table-log op must be called on a stream table"
        )
    # Tim: as part of weaveflow merge, i added underscores here. Not sure
    # how this ever worked before
    st = StreamTable(
        table_name=st_obj._table_name,
        project_name=st_obj._project_name,
        entity_name=st_obj._entity_name,
    )
    st.log(val)
def _get_uri_from_node(node: graph.Node[typing.Any], op_title: str) -> str:
    if not isinstance(node, graph.OutputNode):
        raise errors.WeaveInternalError(f"{op_title} target must be an OutputNode")
    if node.from_op.name != "get":
        raise errors.WeaveInternalError(f"{op_title} target must be a get")
    if not isinstance(node.from_op.inputs["uri"], graph.ConstNode):
        raise errors.WeaveInternalError(
            f"{op_title} op argument must be a const string"
        )
    return node.from_op.inputs["uri"].val
---
source: https://github.com/wandb/weave/tree/master/examples/prompts/trace_debugging/dev/synthetic_trace_data.ipynb
source_type: notebook
has_code: True
# Purely a simulation hack
self\_span["timestamp"] = datetime.datetime.fromtimestamp(start\_time\_s)
accumulator.append(self\_span)
return self\_span
```
```
st = StreamTable(f"{WB\_ENTITY}/{WB\_PROJECT}/{WB\_STREAM}")
start\_time\_s = datetime.datetime.now().timestamp()
for i in range(10):
acc = []
create\_random\_span(acc, start\_time\_s=start\_time\_s)
st.log(acc)
start\_time\_s = max([a['timestamp'].timestamp() for a in acc])
st.finish()
```
Click on the "View data at:" link above to view your data stream in Weave.  
You can click on "+ New board from template" on the right to create a Trace Debug Board, which enables:
\* key LLM tuning metrics at a glance: latency and success vs failure, for each call and as a distribution
\* complete view of call details: inputs, outputs, status, timestamp, etc—also available for downtream queries & analaysis
\* interactive W&B Trace view: a color-coded flow chart of every step of an LLM chain, with full metadata, model details, and similar span views updating in sync with every selected span
\* monitor & analyze from the automatic Board; customize & publish the Board for streamlined collaboration with your team
---
source: https://github.com/wandb/weave/tree/master/weave/graph_client_wandb_art_st.py
source_type: code
has_code: True
def run_children(self, run_id: str) -> WeaveIter[RunStreamTableSpan]:
        with context_state.lazy_execution():
            rows_node = self.runs_st.rows()
            filter_node = rows_node.filter(lambda row: row["parent_id"] == run_id)  # type: ignore
            return WeaveIter(filter_node, cls=RunStreamTableSpan)
    # Hmm... I want this to be a ref to an op I think?
    def op_runs(self, op_def: op_def.OpDef) -> WeaveIter[RunStreamTableSpan]:
        with context_state.lazy_execution():
            rows_node = self.runs_st.rows()
            filter_node = rows_node.filter(  # type: ignore
                lambda row: row["name"] == str(op_def.location)
            )
            return WeaveIter(filter_node, cls=RunStreamTableSpan)
    def ref_input_to(self, ref: Ref) -> WeaveIter[RunStreamTableSpan]:
        with context_state.lazy_execution():
            rows_node = self.runs_st.rows()
            filter_node = rows_node.filter(lambda row: row["inputs._ref0"] == ref)  # type: ignore
            return WeaveIter(filter_node, cls=RunStreamTableSpan)
---
source: https://github.com/wandb/wandb/tree/main/tests/pytest_tests/system_tests/test_core/test_sender.py
source_type: code
has_code: True
def test_save_glob_multi_write(relay_server, user, mock_run, backend_interface):
    run = mock_run(use_magic_mock=True)
    with relay_server() as relay, backend_interface(run) as interface:
        interface.publish_files({"files": [("checkpoints/*", "now")]})
        filesystem.mkdir_exists_ok(os.path.join(run.dir, "checkpoints"))
        test_file_1 = os.path.join(run.dir, "checkpoints", "test_1.txt")
        test_file_2 = os.path.join(run.dir, "checkpoints", "test_2.txt")
        print("Wrote file 1")
        with open(test_file_1, "w") as f:
            f.write("TEST TEST")
        # File system polling happens every second
        time.sleep(1.5)
        print("Wrote file 2")
        with open(test_file_2, "w") as f:
            f.write("TEST TEST TEST TEST")
        time.sleep(1.5)
        print("Stopping backend")
    uploaded_files = relay.context.get_run_uploaded_files(run.id)
    assert uploaded_files.count("checkpoints/test_1.txt") == 1
    assert uploaded_files.count("checkpoints/test_2.txt") == 1
@pytest.mark.xfail(reason="This test is flakey")
@pytest.mark.wandb_core_failure(
    feature="file_uploader",
    reason="test relies on internal python implementation",
)<!--end-context-information-->
<!--start-question-->**Question**: # Query
Can multiple users write to the same stream table?
---
# Query Metadata
Language: en
Intents: 
Product Features:
	The user is inquiring about the capability of Weights & Biases to support concurrent writes by multiple users to a single stream table, which is a feature-related question.
	The query is related to a feature of Weights & Biases such as Sweeps, Artifacts, Reports, Experiments, Tables, Prompts, Launch, Weave, StreamTables and more. Provide a link to the relevant documentation and explain the feature in detail
Sub-queries to consider answering: 
	How does Weights & Biases handle concurrent writes to a stream table?
	Can multiple users access and write to the same stream table simultaneously?
	What are the best practices for multiple users writing to a single stream table?
<!--end-question-->
<!--final-answer-in-markdown-->
question
Can multiple users write to the same stream table?
answer
Yes, multiple users can write to the same StreamTable in Weights & Biases. The server uses a queue to order any concurrent messages, supporting multiple or parallel processes writing to the same StreamTable. This functionality is particularly useful for collaborative projects or environments where data from different sources need to be aggregated in real-time.
Here's how Weights & Biases handles concurrent writes to a StreamTable:
- **Concurrent Writing**: Multiple processes or users can log data to the same StreamTable instance. The data management service provided by W&B ensures that all the data is correctly appended to the table, maintaining the integrity and order of the data.
- **Queue Management**: The server employs a queue system to manage incoming data from multiple sources. This system ensures that even if multiple users are logging data simultaneously, each entry is processed in the order it was received.
- **Best Practices**: When multiple users are writing to a single StreamTable, it's essential to ensure that each user's data is correctly identified and that there's a clear understanding of the schema being used. Consistency in the data format and structure helps in maintaining the quality and usability of the logged data.
To reconnect to the same StreamTable instance across multiple sessions or runs of your script, you can use the StreamTable's unique name. For example:
```python
st = StreamTable("your_entity/your_project/your_stream_table_name")
```
This allows for continuous data logging across different sessions, making it highly flexible for long-term projects or experiments where data collection is ongoing.
Remember, if you save the StreamTable Panel as a Board, the Board will continue to update as more data is sent to the same StreamTable instance, providing a dynamic and real-time view of your data.
**Sources**:
- [StreamTable API Notebook Example](https://github.com/wandb/weave/tree/master/examples/experimental/ProductionMonitoring/stream_table_api.ipynb)
- [Production Monitoring Documentation](https://docs.wandb.ai/guides/weave/prod-mon)
model
gpt-4-0125-preview
sources
https://github.com/wandb/weave/tree/master/examples/experimental/ProductionMonitoring/stream_table_api.ipynb
https://docs.wandb.ai/guides/weave/prod-mon
https://docs.wandb.ai/guides/weave/prod-mon
https://github.com/wandb/weave/tree/master/weave/engine_trace.py
https://github.com/wandb/weave/tree/master/examples/experimental/ProductionMonitoring/ProductionMonitoringConceptualOverview.ipynb
https://docs.wandb.ai/guides/weave/streamtable
https://github.com/wandb/weave/tree/master/examples/experimental/ProductionMonitoring/stream_table_api.ipynb
https://docs.wandb.ai/guides/weave/boards
https://github.com/wandb/weave/tree/master/weave/tests/test_wb_history_loading_compatability.py
https://github.com/wandb/weave/tree/master/examples/experimental/ProductionMonitoring/stream_table_api.ipynb
https://github.com/wandb/weave/tree/master/examples/prompts/llm_monitoring/openai_proxy_quickstart_enterprise_mode.ipynb
https://github.com/wandb/weave/tree/master/weave/ops_primitives/weave_api.py
https://github.com/wandb/weave/tree/master/examples/prompts/trace_debugging/dev/synthetic_trace_data.ipynb
https://github.com/wandb/weave/tree/master/weave/graph_client_wandb_art_st.py
https://github.com/wandb/wandb/tree/main/tests/pytest_tests/system_tests/test_core/test_sender.py
source_documents
source: https://github.com/wandb/weave/tree/master/examples/experimental/ProductionMonitoring/stream_table_api.ipynb
source_type: notebook
has_code: True
# show the StreamTable
st
```  
All log calls on a given StreamTable instance will append the given rows to that instance.  
In a notebook, the StreamTable variable on a line by itself will return a Weave Panel view of the StreamTable. The StreamTable will contain all the logged columns and their values, as well as a `timestamp` column indicating when the row was logged. By default, rows will be ordered by oldest first. You can modify a StreamTable Panel from the UI to sort by columns, group by column values, filter for specific ranges or values, etc.  
**Note:** If you would like to customize and save a specific view of a StreamTable Panel, open the StreamTable Panel in a new window as a Board and edit/save a Board from this seed panel. There are two options to achieve this:
\* via the weave.wandb.ai/?exp=... URL
\* via "Open in new tab" arrow button, revealed in the menu when you hover on the right side of a StreamTable panel displayed in the notebok)
Continue logging as much data as you like to any StreamTable instance. You can keep a reference to a given Python StreamTable object in your notebook session or script, and you can reconnect to the same StreamTable instance across multiple sessions/runs of your script via the StreamTable's unique name (e.g. `st = StreamTable("stacey/mesa/my_stream_table")` ) and keep adding rows. Multiple/parallel processes writing to the same StreamTable are also supported—the server will use a queue to order any concurrent messages.  
If you save the StreamTable Panel as a Board, the Board will continue to update as you send more data to the same StreamTable instance.  
```
st.log({"col\_a" : 5, "col\_b" : -24, "col\_c" : "hello"})
st.log([{"col\_a" : 255, "col\_b" : 3.1415926, "col\_c" : "hi!"}])
# optional: wait for all the rows to finish logging before loading
st.finish()
st
```
---
source: https://docs.wandb.ai/guides/weave/prod-mon
source_type: documentation
has_code: True
:::info
Production monitoring is currently in preview and under active development.
:::  
Production monitoring for AI means real-time observability and analytics for any models served from your application. For models deployed to production, monitoring tools and dashboards help track key performance metrics like query rates and latency and enable interactive analytics around model prediction quality and trends, patterns of errors or edge cases, data drift, etc.
W&B offers a data management service to compliment the open source Weave project. You can stream live data (and/or save batch tables) in any schema that makes sense for your use case and workflow—like a no-setup database, without the SQL. This approach is effective for tracking and visualizing live production queries, model predictions, dynamic evaluation metrics, user feedback, and more.  
Get started in two steps:  
1. Log data using the Weave StreamTable API
2. Seed a Weave Board from the UI
```
from weave.monitoring import StreamTable
table = StreamTable("prodmon\_demo")
for i in range(100):
table.log({"\_id": i, "text": "hi " + str(i), "img": gen\_image()})
```  
Read more about the Weave StreamTable API  
Try a simple interactive example in a Jupyter notebook
* Persisted and secured in W&B
* Columnar storage for efficient queries
* Any arbitrary data shape
* Custom, non-primitive types (e.g, images)
* Supports multiple parallel clients writers
* Automatically track log time
```
from weave.monitoring import monitor
@monitor()
def ask\_llm\_calculator(prompt, question):
return agent.run(prompt + " " + question)
ask\_llm\_calculator(
"Please accurately answer the following question:", "Find the square root of 5.4"
)
```
* tracks inputs, outputs, latency, timestamp, & exceptions.
* supports pre- and post- processing of inputs and outputs
* able to add data to rows after execution  
Try an interactive example: monitoring an MNIST model with live user feedback in a Jupyter notebook
---
source: https://docs.wandb.ai/guides/weave/prod-mon
source_type: documentation
has_code: True
:::info
Production monitoring is currently in preview and under active development.
:::  
Production monitoring for AI means real-time observability and analytics for any models served from your application. For models deployed to production, monitoring tools and dashboards help track key performance metrics like query rates and latency and enable interactive analytics around model prediction quality and trends, patterns of errors or edge cases, data drift, etc.
W&B offers a data management service to compliment the open source Weave project. You can stream live data (and/or save batch tables) in any schema that makes sense for your use case and workflow—like a no-setup database, without the SQL. This approach is effective for tracking and visualizing live production queries, model predictions, dynamic evaluation metrics, user feedback, and more.  
Get started in two steps:  
1. Log data using the Weave StreamTable API
2. Seed a Weave Board from the UI
```
from weave.monitoring import StreamTable
table = StreamTable("prodmon\_demo")
for i in range(100):
table.log({"\_id" : i, "text" : "hi " + str(i), "img" : gen\_image()}
```  
Read more about the Weave StreamTable API  
Try a simple interactive example in a Jupyter notebook
* Persisted and secured in W&B
* Columnar storage for efficient queries
* Any arbitrary data shape
* Custom, non-primitive types (e.g, images)
* Supports multiple parallel clients writers
* Automatically track log time
```
from weave.monitoring import monitor
@monitor()
def ask\_llm\_calculator(prompt, question):
return agent.run(prompt + " " + question)
ask\_llm\_calculator(
"Please accurately answer the following question:",
"Find the square root of 5.4"
)
```
* tracks inputs, outputs, latency, timestamp, & exceptions.
* supports pre- and post- processing of inputs and outputs
* able to add data to rows after execution  
Try an interactive example: monitoring an MNIST model with live user feedback in a Jupyter notebook
---
source: https://github.com/wandb/weave/tree/master/weave/engine_trace.py
source_type: code
has_code: True
def send_proc(queue):
    while True:
        spans = queue.get()
        if spans is None:
            break
        trace_stream = weave_trace_stream()
        if trace_stream is not None:
            for span in spans:
                trace_stream.log(span)
# A DataDog writer that sends spans to W&B as a StreamTable.
# We have to use a separate process for writing the stream, otherwise things hang.
# My guess is this is because logging to StreamTable uses gql, which is wrapped
# by datadog, so we have some kind of re-entrancy/deadlock. Putting in a separate
# process fixes.
---
source: https://github.com/wandb/weave/tree/master/examples/experimental/ProductionMonitoring/ProductionMonitoringConceptualOverview.ipynb
source_type: notebook
has_code: True
With W&B's Weave library, we can stream any data to W&B for storage and further analysis.  
```
import weave
weave.use\_frontend\_devmode()
from weave.monitoring import StreamTable
# Initialize a stream table
# (optionally change the name argument to any string
# that follows the wandbentity\_name/project\_name/table\_name format)
st = StreamTable(f"{WB\_ENTITY}/{WB\_PROJECT}/logged\_predictions")
\_, \_, x\_test, y\_test = dataset
for i in range(100):
image\_arr = x\_test[i]
truth = y\_test[i].tolist()
preds = simple\_predict(image\_arr)
# Log the data
st.log({
\*\*preds,
"image": model\_util.image\_from\_array(image\_arr),
"truth": truth
})
# Optional: wait for the logs to finish uploading (nicer for live demos)
st.finish()
# Show the StreamTable
st
```
---
source: https://docs.wandb.ai/guides/weave/streamtable
source_type: documentation
has_code: True
Log and explore some basic StreamTables in an interactive notebook.  
* A Weave StreamTable object enables continuous streaming of data from an application or service to W&B
* Append data repeatedly to the same StreamTable object with `.log([your data rows])`
* Build dynamic visualizations from the streaming data, like this example to recognize MNIST digits with live user feedback.
The only required argument to create a StreamTable is the name of the StreamTable object.  
```
from weave.monitoring import StreamTable
st = StreamTable("my\_entity\_name/my\_project\_name/my\_table\_name")
```  
If an entity (W&B username or shared team name) is not provided, this will attempt to default to the current logged-in entity.
Call `.log()` to add rows to a StreamTable:  
```
st.log({"one\_column\_name": "value\_a", "another\_column\_name": 7})
st.log(
[
{"one\_column\_name": "value\_b", "another\_column\_name": 19},
{"one\_column\_name": "value\_c", "another\_column\_name": 28},
{"one\_column\_name": "value\_d", "another\_column\_name": 36},
]
)
```  
`.log()` accepts a single dictionary or a list of dictionaries, where each dictionary entry corresponds to one row of the table. In each dictionary, the keys are column names and the values are the corresponding cell values.
---
source: https://github.com/wandb/weave/tree/master/examples/experimental/ProductionMonitoring/stream_table_api.ipynb
source_type: notebook
has_code: True
This notebook demonstrates basic Weave StreamTable usage with interactive examples.
All the StreamTables created in this notebook will be saved to the WB\_PROJECT under the WB\_ENTITY account on the public W&B cloud.  
**Please login to W&B and set your WB\_ENTITY** before running this demo. If entity is not provided explicitly, this will attempt to default to the current logged-in entity if one is available.  
```
!pip install weave
import wandb
wandb.login()
```  
```
import weave
from weave.monitoring import StreamTable
```  
```
STREAM\_TABLE\_NAME = "my\_stream\_table"
WB\_PROJECT = "mesa"
WB\_ENTITY = ""
```
StreamTable has a single required argument: the name of the StreamTable object.  
```
st = StreamTable("stacey/mesa/my\_stream\_table")
```  
This takes the form `my_wb_entity/my_wb_project_name/my_stream_table_name` where you can modify the component names to the relevant strings (e.g. your W&B username or shared W&B team name, a new or existing W&B project name).
---
source: https://docs.wandb.ai/guides/weave/boards
source_type: documentation
has_code: False
Dynamically visualize, configure, and extend any StreamTable as a Weave Board. A Weave Board is a fully-customizable arrangement of Weave Panels and their underlying data, with versioned history. From the Weave Board UI, you can iteratively edit Weave Panels, load in any existing W&B Tables, create and link new Panels, make charts (scatter, line, bar, etc), define and call external APIs, and much more as your project evolves.
1. Seed new Boards from any Weave Panel and rename them for easy tracking and future reference.
2. Customize your Board in the UI—any changes in the current session are automatically saved in a new working branch.
3. Click "commit" in the top right whenever you want to commit the current session/latest sequence of local changes to the current Board name.
4. Click "publish" in the top right to share your named Board to weave.wandb.ai.
Go to the Weave Home page at weave.wandb.ai to see all of your Tables and Boards stored in W&B. You can browse by entity (username or team) from the left sidebar and scroll through projects by name. W&B public cloud projects will display up to the 100 most recent Tables logged to that project by name. Currently, all Weave Boards for a given entity will be saved in the "weave" project and follow the privacy settings of that project in the W&B cloud.
You can create an empty Board via "+ New board" on the Weave Home page, or seed a Weave Board with an existing Weave Panel:
1. Log in to W&B
2. Go to the Weave Home page at weave.wandb.ai.
3. Navigate to the relevant project and Table you've logged to W&B (e.g. find the StreamTable by name)
4. Preview the Table and click "Seed new board".
1. Starting from a Weave Panel rendered in a notebook cell, open the Panel in a new tab to show a full-screen dashboard.
2. Click “Seed new board” in the bottom right.
---
source: https://github.com/wandb/weave/tree/master/weave/tests/test_wb_history_loading_compatability.py
source_type: code
has_code: True
def test_stream_table_perf(user_by_api_key_in_env, n_rows, n_cols):
    print(f"Conducting StreamTable perf test using {n_rows} rows and {n_cols} cols")
    table_name = "test_table_" + str(int(time.time()))
    st = wandb_stream_table.StreamTable(
        table_name=table_name,
        project_name="dev_test_weave_ci",
        entity_name=user_by_api_key_in_env.username,
        _disable_async_file_stream=True,
    )
    timings = {
        "log": 0,
        "history2_refine": 0,
        "history2_fetch_100_cols": 0,
        "history3_refine": 0,
        "history3_fetch_100_cols": 0,
    }
    print_interval = int(n_rows / 10)
    timings["log"] = 0
    for i in range(n_rows):
        row = {}
        for j in range(n_cols):
            row[f"col_{j}"] = (i * j) + (i + j)
        timings["log"] -= time.time()
        st.log(row)
        timings["log"] += time.time()
        if i % print_interval == 0:
            print(f"Logged {i} rows")
        if i % 100 == 0:
---
source: https://github.com/wandb/weave/tree/master/examples/experimental/ProductionMonitoring/stream_table_api.ipynb
source_type: notebook
has_code: True
To add rows to the StreamTable, call `.log()` on the StreamTable object.
`.log()` accepts a single dictionary or a list of dictionaries, where each dictionary entry corresponds to one row of the table. In each dictionary, the keys are column names and the values are the corresponding cell values.  
```
st.log({"one\_column\_name" : "value\_a", "another\_column\_name" : 7})
st.log([{"one\_column\_name" : "value\_b", "another\_column\_name" : 19},
{"one\_column\_name" : "value\_c", "another\_column\_name" : 28},
{"one\_column\_name" : "value\_d", "another\_column\_name" : 36}]
```  
The first call to `.log()` will return a Weave Panel URL, where you can view, edit, and save the resulting StreamTable as a Weave Board, of the form:  
View data at: <https://weave.wandb.ai/?exp=get%28%0A++++%22wandb-artifact%3A%2F%2F%2Fstacey%2Fmesa%2Fmy_stream_table%3Alatest%2Fobj%22%29%0A++.rows>  
```
st = StreamTable(f"{WB\_ENTITY}/{WB\_PROJECT}/{STREAM\_TABLE\_NAME}")
# log data to the StreamTable as a dictionary or list of dictionaries
st.log({"col\_a" : "1", "col\_b" : "17", "col\_c" : "42"})
---
source: https://github.com/wandb/weave/tree/master/examples/prompts/llm_monitoring/openai_proxy_quickstart_enterprise_mode.ipynb
source_type: notebook
has_code: True
Now, users will typically want to track their own logs so they can analyze them independently. In this case we combine the monitoring techiques discussed in OpenAI Monitoring Demo with the proxy concepts here.  
You can use this proxy with the Weave SDK to log out to a personal streamtable and a proxy-configured streamtable simultaneously. This can be useful for admins who want to monitor API calls in their organization.
- Users specify a personal table path with `init_monitor(entity/project/table)`. Any `ChatCompletion.create` calls with automatically log to this table.
- Then, if the `openai.api_base` and `openai.api_key` are set to proxy format, the above calls will also log the proxy-configured table.  
NOTE: This currently does not work with streaming `ChatCompletion`, but we hope to support it soon!  
```
WB\_PROJECT = "llmon"
STREAM\_NAME = "openai\_logs"
WB\_ENTITY = ""  # optional: wandb username or team name
```  
**Note**: The WB\_ENTITY will match the default entity associated with your W&B API key. You can optionally set WB\_ENTITY to a different wandb username or team name. Log in to W&B and navigate to the Home Page to see any other valid options for your WB\_ENTITY under your "Profile" and "Teams" in the left sidebar.  
```
!pip install weave tiktoken
```  
```
import os
os.environ["WANDB\_BASE\_URL"] = WANDB\_BASE\_URL
import wandb
wandb.login()
```
---
source: https://github.com/wandb/weave/tree/master/weave/ops_primitives/weave_api.py
source_type: code
has_code: True
def stream_table_log(self: graph.Node, val: typing.Any) -> typing.Any:
    st_obj = weave_internal.use(self)
    from weave.monitoring import StreamTable
    if not isinstance(st_obj, StreamTable):
        raise errors.WeaveInternalError(
            "stream_table-log op must be called on a stream table"
        )
    # Tim: as part of weaveflow merge, i added underscores here. Not sure
    # how this ever worked before
    st = StreamTable(
        table_name=st_obj._table_name,
        project_name=st_obj._project_name,
        entity_name=st_obj._entity_name,
    )
    st.log(val)
def _get_uri_from_node(node: graph.Node[typing.Any], op_title: str) -> str:
    if not isinstance(node, graph.OutputNode):
        raise errors.WeaveInternalError(f"{op_title} target must be an OutputNode")
    if node.from_op.name != "get":
        raise errors.WeaveInternalError(f"{op_title} target must be a get")
    if not isinstance(node.from_op.inputs["uri"], graph.ConstNode):
        raise errors.WeaveInternalError(
            f"{op_title} op argument must be a const string"
        )
    return node.from_op.inputs["uri"].val
---
source: https://github.com/wandb/weave/tree/master/examples/prompts/trace_debugging/dev/synthetic_trace_data.ipynb
source_type: notebook
has_code: True
# Purely a simulation hack
self\_span["timestamp"] = datetime.datetime.fromtimestamp(start\_time\_s)
accumulator.append(self\_span)
return self\_span
```
```
st = StreamTable(f"{WB\_ENTITY}/{WB\_PROJECT}/{WB\_STREAM}")
start\_time\_s = datetime.datetime.now().timestamp()
for i in range(10):
acc = []
create\_random\_span(acc, start\_time\_s=start\_time\_s)
st.log(acc)
start\_time\_s = max([a['timestamp'].timestamp() for a in acc])
st.finish()
```
Click on the "View data at:" link above to view your data stream in Weave.  
You can click on "+ New board from template" on the right to create a Trace Debug Board, which enables:
\* key LLM tuning metrics at a glance: latency and success vs failure, for each call and as a distribution
\* complete view of call details: inputs, outputs, status, timestamp, etc—also available for downtream queries & analaysis
\* interactive W&B Trace view: a color-coded flow chart of every step of an LLM chain, with full metadata, model details, and similar span views updating in sync with every selected span
\* monitor & analyze from the automatic Board; customize & publish the Board for streamlined collaboration with your team
---
source: https://github.com/wandb/weave/tree/master/weave/graph_client_wandb_art_st.py
source_type: code
has_code: True
def run_children(self, run_id: str) -> WeaveIter[RunStreamTableSpan]:
        with context_state.lazy_execution():
            rows_node = self.runs_st.rows()
            filter_node = rows_node.filter(lambda row: row["parent_id"] == run_id)  # type: ignore
            return WeaveIter(filter_node, cls=RunStreamTableSpan)
    # Hmm... I want this to be a ref to an op I think?
    def op_runs(self, op_def: op_def.OpDef) -> WeaveIter[RunStreamTableSpan]:
        with context_state.lazy_execution():
            rows_node = self.runs_st.rows()
            filter_node = rows_node.filter(  # type: ignore
                lambda row: row["name"] == str(op_def.location)
            )
            return WeaveIter(filter_node, cls=RunStreamTableSpan)
    def ref_input_to(self, ref: Ref) -> WeaveIter[RunStreamTableSpan]:
        with context_state.lazy_execution():
            rows_node = self.runs_st.rows()
            filter_node = rows_node.filter(lambda row: row["inputs._ref0"] == ref)  # type: ignore
            return WeaveIter(filter_node, cls=RunStreamTableSpan)
---
source: https://github.com/wandb/wandb/tree/main/tests/pytest_tests/system_tests/test_core/test_sender.py
source_type: code
has_code: True
def test_save_glob_multi_write(relay_server, user, mock_run, backend_interface):
    run = mock_run(use_magic_mock=True)
    with relay_server() as relay, backend_interface(run) as interface:
        interface.publish_files({"files": [("checkpoints/*", "now")]})
        filesystem.mkdir_exists_ok(os.path.join(run.dir, "checkpoints"))
        test_file_1 = os.path.join(run.dir, "checkpoints", "test_1.txt")
        test_file_2 = os.path.join(run.dir, "checkpoints", "test_2.txt")
        print("Wrote file 1")
        with open(test_file_1, "w") as f:
            f.write("TEST TEST")
        # File system polling happens every second
        time.sleep(1.5)
        print("Wrote file 2")
        with open(test_file_2, "w") as f:
            f.write("TEST TEST TEST TEST")
        time.sleep(1.5)
        print("Stopping backend")
    uploaded_files = relay.context.get_run_uploaded_files(run.id)
    assert uploaded_files.count("checkpoints/test_1.txt") == 1
    assert uploaded_files.count("checkpoints/test_2.txt") == 1
@pytest.mark.xfail(reason="This test is flakey")
@pytest.mark.wandb_core_failure(
    feature="file_uploader",
    reason="test relies on internal python implementation",
)
total_tokens
8,952
prompt_tokens
8,315
completion_tokens
637
time_taken
25.928556
start_time
datetime.datetime(2024, 9, 6, 11, 47, 42, 177514, tzinfo=datetime.timezone.utc)
end_time
datetime.datetime(2024, 9, 6, 11, 48, 8, 107323, tzinfo=datetime.timezone.utc)