{ "metadata": { }, "nbformat": 4, "nbformat_minor": 5, "cells": [ { "id": "metadata", "cell_type": "markdown", "source": "
\n\n# Scripting Galaxy using the API and BioBlend\n\nby [Nicola Soranzo](https://training.galaxyproject.org/hall-of-fame/nsoranzo/), [Clare Sloggett](https://training.galaxyproject.org/hall-of-fame/claresloggett/), [Nitesh Turaga](https://training.galaxyproject.org/hall-of-fame/nturaga/), [Helena Rasche](https://training.galaxyproject.org/hall-of-fame/hexylena/)\n\nCC-BY licensed content from the [Galaxy Training Network](https://training.galaxyproject.org/)\n\n**Questions**\n\n- What is a REST API?\n- How can I interact with Galaxy programmatically?\n- Why and when should I use BioBlend?\n\n**Objectives**\n\n- Interact with Galaxy via BioBlend.\n\n**Time Estimation: 2h**\n
\n", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-0", "source": "
\n
Agenda
\n

In this tutorial, we will cover:

\n
    \n
  1. Interacting with histories in Galaxy API
  2. Interacting with histories in BioBlend
  3. Interacting with histories in BioBlend.objects
\n
\n

Interacting with histories in Galaxy API

\n

We are going to use the requests Python library to communicate via HTTP with the Galaxy server. To start, let’s define the connection parameters.

\n

You need to insert the API key for your Galaxy server in the cell below:

\n
    \n
  1. Open the Galaxy server in another browser tab
  2. Click on “User” on the top menu, then “Preferences”
  3. Click on “Manage API key”
  4. Generate an API key if needed, then copy the alphanumeric string and paste it as the value of the api_key variable below.
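
The next cell builds the API base URL with urljoin. As a quick server-free sketch (using a made-up https://example.org/galaxy URL, not a real Galaxy server), note that the trailing slash on the server URL matters when Galaxy is served under a path prefix:

```python
from urllib.parse import urljoin

# With a trailing slash, the base path is preserved:
with_slash = urljoin('https://example.org/galaxy/', 'api')
# Without it, urljoin replaces the last path component:
without_slash = urljoin('https://example.org/galaxy', 'api')
print(with_slash)     # https://example.org/galaxy/api
print(without_slash)  # https://example.org/api
```

This is why the server variable below ends with a slash.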
\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-1", "source": [ "import json\n", "from pprint import pprint\n", "from urllib.parse import urljoin\n", "\n", "import requests\n", "\n", "server = 'https://usegalaxy.eu/'\n", "api_key = ''\n", "base_url = urljoin(server, 'api')\n", "base_url" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ "python" ], "id": "" } } }, { "id": "cell-2", "source": "

We now make a GET request to retrieve all histories owned by a user:

\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-3", "source": [ "headers = {\"Content-Type\": \"application/json\", \"x-api-key\": api_key}\n", "r = requests.get(base_url + \"/histories\", headers=headers)\n", "print(r.text)\n", "hists = r.json()\n", "pprint(hists)" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ "python" ], "id": "" } } }, { "id": "cell-4", "source": "

As you can see, GET requests to the Galaxy API return JSON strings, which need to be deserialized into Python data structures. In particular, GETting a resource collection returns a list of dictionaries.

\n

Each dictionary returned when GETting a resource collection gives basic info about a resource, e.g. for a history you have:

\n\n
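
To see what that deserialization step looks like in isolation, here is a server-free sketch using a made-up, trimmed payload (the real response contains more fields and more histories):

```python
import json

# A made-up payload mimicking the shape of GET /api/histories output:
payload = '[{"id": "abc123", "name": "Unnamed history", "deleted": false}]'
hists = json.loads(payload)  # deserialize the JSON string
print(type(hists).__name__)     # list
print(type(hists[0]).__name__)  # dict
print(hists[0]['name'])         # Unnamed history
```

requests performs the same deserialization for you when you call r.json().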

There is no readily-available filtering capability, but it’s not difficult to filter histories by name:

\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-5", "source": [ "pprint([_ for _ in hists if _['name'] == 'Unnamed history'])" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ "python" ], "id": "" } } }, { "id": "cell-6", "source": "

If you are interested in more details about a given resource, you just need to append its id to the previous collection request, e.g. to get more info about a history:

\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-7", "source": [ "hist0_id = hists[0]['id']\n", "print(hist0_id)\n", "r = requests.get(base_url + \"/histories/\" + hist0_id, headers=headers)\n", "pprint(r.json())" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ "python" ], "id": "" } } }, { "id": "cell-8", "source": "

As you can see, there are many more entries in the returned dictionary, e.g.:

\n\n

To get the list of datasets contained in a history, simply append /contents to the previous resource request.

\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-9", "source": [ "r = requests.get(base_url + \"/histories/\" + hist0_id + \"/contents\", headers=headers)\n", "hdas = r.json()\n", "pprint(hdas)" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ "python" ], "id": "" } } }, { "id": "cell-10", "source": "

The dictionaries returned when GETting the history content give basic info about each dataset, e.g.: id, name, deleted, state, url

\n
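
Since these are plain dictionaries, you can filter them with ordinary Python. A minimal sketch with made-up records (no server needed) that keeps only datasets in the ok state:

```python
# Made-up records with the basic fields listed above:
hdas = [
    {"id": "d1", "name": "1.txt", "deleted": False, "state": "ok"},
    {"id": "d2", "name": "2.txt", "deleted": False, "state": "error"},
]
# Keep only datasets that finished successfully:
ok_names = [h["name"] for h in hdas if h["state"] == "ok"]
print(ok_names)  # ['1.txt']
```

The same comprehension works on the real list returned by the /contents request above.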

To get the details about a specific dataset, you can use the datasets controller:

\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-11", "source": [ "hda0_id = hdas[0]['id']\n", "print(hda0_id)\n", "r = requests.get(base_url + \"/datasets/\" + hda0_id, headers=headers)\n", "pprint(r.json())" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ "python" ], "id": "" } } }, { "id": "cell-12", "source": "

Some of the interesting additional dictionary entries are:

\n\n

New resources are created with POST requests. The uploaded data needs to be serialized in a JSON string. For example, to create a new history:

\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-13", "source": [ "data = {'name': 'New history'}\n", "r = requests.post(base_url + \"/histories\", data=json.dumps(data), headers=headers)\n", "new_hist = r.json()\n", "pprint(new_hist)" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ "python" ], "id": "" } } }, { "id": "cell-14", "source": "

The return value of a POST request is a dictionary with detailed info about the created resource.

\n

To update a resource, make a PUT request, e.g. to change the history name:

\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-15", "source": [ "data = {'name': 'Updated history'}\n", "r = requests.put(base_url + \"/histories/\" + new_hist[\"id\"], json.dumps(data), headers=headers)\n", "print(r.status_code)\n", "pprint(r.json())" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ "python" ], "id": "" } } }, { "id": "cell-16", "source": "

The return value of a PUT request is usually a dictionary with detailed info about the updated resource.

\n

Finally to delete a resource, make a DELETE request, e.g.:

\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-17", "source": [ "r = requests.delete(base_url + \"/histories/\" + new_hist[\"id\"], headers=headers)\n", "print(r.status_code)" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ "python" ], "id": "" } } }, { "id": "cell-18", "source": "

Exercise: Galaxy API

\n

Goal: Upload a file to a new history, import a workflow and run it on the uploaded dataset.

\n
\n
Question: Initialise
\n

First, define the connection parameters. What variables do you need?

\n
👁 View solution\n
\n
import json\nfrom pprint import pprint\nfrom urllib.parse import urljoin\n\nimport requests\n\nserver = 'https://usegalaxy.eu/'\napi_key = ''\nbase_url = urljoin(server, 'api')\n
\n
\n
\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-19", "source": [ "# Try it out here!\n", "" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ "python" ], "id": "" } } }, { "id": "cell-20", "source": "
\n
Question: New History
\n

Next, create a new Galaxy history via POST to the correct API.

\n
👁 View solution\n
\n
headers = {\"Content-Type\": \"application/json\", \"x-api-key\": api_key}\ndata = {\"name\": \"New history\"}\nr = requests.post(base_url + \"/histories\", data=json.dumps(data), headers=headers)\nnew_hist = r.json()\npprint(new_hist)\n
\n
\n
\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-21", "source": [ "# Try it out here!\n", "" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ "python" ], "id": "" } } }, { "id": "cell-22", "source": "
\n
Question: Upload a dataset
\n

Upload the local file 1.txt to the new history. You need to run the special upload1 tool by making a POST request to /api/tools. You don’t need to pass any inputs to it apart from attaching the file as files_0|file_data. Also, note that when attaching a file you need to drop Content-Type from the request headers.

\n

You can obtain the 1.txt file from the following URL; you’ll need to download it first.

\n
https://raw.githubusercontent.com/nsoranzo/bioblend-tutorial/main/test-data/1.txt\n
\n
👁 View solution\n
\n
data = {\n    \"history_id\": new_hist[\"id\"],\n    \"tool_id\": \"upload1\"\n}\nwith open(\"1.txt\", \"rb\") as f:\n    files = {\"files_0|file_data\": f}\n    r = requests.post(base_url + \"/tools\", data=data, files=files, headers={\"x-api-key\": api_key})\nret = r.json()\npprint(ret)\n
\n
\n
\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-23", "source": [ "# Try it out here!\n", "" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ "python" ], "id": "" } } }, { "id": "cell-24", "source": "
\n
Question: Find the dataset in your history
\n

Find the newly uploaded dataset, either from the dict returned by the POST request above or from the history contents.

\n
👁 View solution\n
\n
hda = ret['outputs'][0]\npprint(hda)\n
\n
\n
\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-25", "source": [ "# Try it out here!\n", "" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ "python" ], "id": "" } } }, { "id": "cell-26", "source": "
\n
Question: Import a workflow
\n

Import a workflow from the local file convert_to_tab.ga by making a POST request to /api/workflows. The only needed data is workflow, which must be a deserialized JSON representation of the workflow.

\n

You can obtain the convert_to_tab.ga file from the following URL; you’ll need to download it first.

\n
https://raw.githubusercontent.com/nsoranzo/bioblend-tutorial/main/test-data/convert_to_tab.ga\n
\n
👁 View solution\n
\n
with open(\"convert_to_tab.ga\", \"r\") as f:\n    workflow_json = json.load(f)\ndata = {'workflow': workflow_json}\nr = requests.post(base_url + \"/workflows\", data=json.dumps(data), headers=headers)\nwf = r.json()\npprint(wf)\n
\n
\n
\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-27", "source": [ "# Try it out here!\n", "" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ "python" ], "id": "" } } }, { "id": "cell-28", "source": "
\n
Question: View the workflow details
\n

View the details of the imported workflow by making a GET request to /api/workflows.

\n
👁 View solution\n
\n
r = requests.get(base_url + \"/workflows/\" + wf[\"id\"], headers=headers)\nwf = r.json()\npprint(wf)\n
\n
\n
\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-29", "source": [ "# Try it out here!\n", "" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ "python" ], "id": "" } } }, { "id": "cell-30", "source": "
\n
Question: Invoke the workflow
\n

Run the imported workflow on the uploaded dataset inside the same history by making a POST request to /api/workflows/WORKFLOW_ID/invocations. The only needed data are history and inputs.

\n
👁 View solution\n
\n
inputs = {0: {'id': hda['id'], 'src': 'hda'}}\ndata = {\n    'history': 'hist_id=' + new_hist['id'],\n    'inputs': inputs}\nr = requests.post(base_url + \"/workflows/\" + wf[\"id\"] + \"/invocations\", data=json.dumps(data), headers=headers)\npprint(r.json())\n
\n
\n
\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-31", "source": [ "# Try it out here!\n", "" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ "python" ], "id": "" } } }, { "id": "cell-32", "source": "
\n
Question: View the results
\n

View the results on the Galaxy server with your web browser. Were you successful? Did it run?

\n
\n

Interacting with histories in BioBlend

\n

You need to insert the API key for your Galaxy server in the cell below:

\n
    \n
  1. Open the Galaxy server in another browser tab
  2. Click on “User” on the top menu, then “Preferences”
  3. Click on “Manage API key”
  4. Generate an API key if needed, then copy the alphanumeric string and paste it as the value of the api_key variable below.
\n

The user interacts with a Galaxy server through a GalaxyInstance object:

\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-33", "source": [ "from pprint import pprint\n", "\n", "import bioblend.galaxy\n", "\n", "server = 'https://usegalaxy.eu/'\n", "api_key = ''\n", "gi = bioblend.galaxy.GalaxyInstance(url=server, key=api_key)" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ "python" ], "id": "" } } }, { "id": "cell-34", "source": "

The GalaxyInstance object gives you access to the various controllers, i.e. the resources you are dealing with, like histories, tools and workflows.\nTherefore, method calls will have the format gi.controller.method(). For example, the call to retrieve all histories owned by the current user is:

\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-35", "source": [ "pprint(gi.histories.get_histories())" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ "python" ], "id": "" } } }, { "id": "cell-36", "source": "

As you can see, methods in BioBlend do not return JSON strings, but deserialize them into Python data structures. In particular, get_ methods return a list of dictionaries.

\n

Each dictionary gives basic info about a resource, e.g. for a history you have:

\n\n

New resources are created with create_ methods, e.g. the call to create a new history is:

\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-37", "source": [ "new_hist = gi.histories.create_history(name='BioBlend test')\n", "pprint(new_hist)" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ "python" ], "id": "" } } }, { "id": "cell-38", "source": "

As you can see, to make POST requests in BioBlend it is not necessary to serialize the data; you just pass it explicitly as parameters. The return value is a dictionary with detailed info about the created resource.

\n

get_ methods usually have filtering capabilities, e.g. it is possible to filter histories by name:

\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-39", "source": [ "pprint(gi.histories.get_histories(name='BioBlend test'))" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ "python" ], "id": "" } } }, { "id": "cell-40", "source": "

To upload the local file 1.txt to the new history, you can run the special upload tool by calling the upload_file method of the tools controller.

\n

You can obtain the 1.txt file from the following URL; you’ll need to download it first.

\n
https://raw.githubusercontent.com/nsoranzo/bioblend-tutorial/main/test-data/1.txt\n
\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-41", "source": [ "hist_id = new_hist[\"id\"]\n", "pprint(gi.tools.upload_file(\"1.txt\", hist_id))" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ "python" ], "id": "" } } }, { "id": "cell-42", "source": "

If you are interested in more details about a given resource for which you know the id, you can use the corresponding show_ method. For example, to get more info about the history we have just populated:

\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-43", "source": [ "pprint(gi.histories.show_history(history_id=hist_id))" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ "python" ], "id": "" } } }, { "id": "cell-44", "source": "

As you can see, there are many more entries in the returned dictionary, e.g.:

\n\n

To get the list of datasets contained in a history, simply add contents=True to the previous call.

\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-45", "source": [ "hdas = gi.histories.show_history(history_id=hist_id, contents=True)\n", "pprint(hdas)" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ "python" ], "id": "" } } }, { "id": "cell-46", "source": "

The dictionaries returned when showing the history content give basic info about each dataset, e.g.: id, name, deleted, state, url

\n

To get the details about a specific dataset, you can use the datasets controller:

\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-47", "source": [ "hda0_id = hdas[0]['id']\n", "print(hda0_id)\n", "pprint(gi.datasets.show_dataset(hda0_id))" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ "python" ], "id": "" } } }, { "id": "cell-48", "source": "

Some of the interesting additional dictionary entries are:

\n\n

To update a resource, use the update_ method, e.g. to change the name of the new history:

\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-49", "source": [ "pprint(gi.histories.update_history(new_hist['id'], name='Updated history'))" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ "python" ], "id": "" } } }, { "id": "cell-50", "source": "

The return value of update_ methods is usually a dictionary with detailed info about the updated resource.

\n

Finally to delete a resource, use the delete_ method, e.g.:

\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-51", "source": [ "pprint(gi.histories.delete_history(new_hist['id']))" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ "python" ], "id": "" } } }, { "id": "cell-52", "source": "

Exercise: BioBlend

\n

Goal: Upload a file to a new history, import a workflow and run it on the uploaded dataset.

\n
\n
Question: Initialise
\n

Create a GalaxyInstance object.

\n
👁 View solution\n
\n
from pprint import pprint\n\nimport bioblend.galaxy\n\nserver = 'https://usegalaxy.eu/'\napi_key = ''\ngi = bioblend.galaxy.GalaxyInstance(url=server, key=api_key)\n
\n
\n
\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-53", "source": [ "# Try it out here!\n", "" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ "python" ], "id": "" } } }, { "id": "cell-54", "source": "
\n
Question: New History
\n

Create a new Galaxy history.

\n
👁 View solution\n
\n
new_hist = gi.histories.create_history(name='New history')\npprint(new_hist)\n
\n
\n
\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-55", "source": [ "# Try it out here!\n", "" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ "python" ], "id": "" } } }, { "id": "cell-56", "source": "
\n
Question: Upload a dataset
\n

Upload the local file 1.txt to the new history using tools.upload_file().

\n

You can obtain the 1.txt file from the following URL; you’ll need to download it first.

\n
https://raw.githubusercontent.com/nsoranzo/bioblend-tutorial/main/test-data/1.txt\n
\n
👁 View solution\n
\n
ret = gi.tools.upload_file(\"1.txt\", new_hist[\"id\"])\npprint(ret)\n
\n
\n
\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-57", "source": [ "# Try it out here!\n", "" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ "python" ], "id": "" } } }, { "id": "cell-58", "source": "
\n
Question: Find the dataset in your history
\n

Find the newly uploaded dataset, either from the dict returned by tools.upload_file() or from the history contents.

\n
👁 View solution\n
\n
hda = ret['outputs'][0]\npprint(hda)\n
\n
\n
\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-59", "source": [ "# Try it out here!\n", "" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ "python" ], "id": "" } } }, { "id": "cell-60", "source": "
\n
Question: Import a workflow
\n

Import a workflow from the local file convert_to_tab.ga using workflows.import_workflow_from_local_path().

\n

You can obtain the convert_to_tab.ga file from the following URL; you’ll need to download it first.

\n
https://raw.githubusercontent.com/nsoranzo/bioblend-tutorial/main/test-data/convert_to_tab.ga\n
\n
👁 View solution\n
\n
wf = gi.workflows.import_workflow_from_local_path(\"convert_to_tab.ga\")\npprint(wf)\n
\n
\n
\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-61", "source": [ "# Try it out here!\n", "" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ "python" ], "id": "" } } }, { "id": "cell-62", "source": "
\n
Question: View the workflow details
\n

View the details of the imported workflow using workflows.show_workflow().

\n
👁 View solution\n
\n
wf = gi.workflows.show_workflow(wf['id'])\npprint(wf)\n
\n
\n
\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-63", "source": [ "# Try it out here!\n", "" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ "python" ], "id": "" } } }, { "id": "cell-64", "source": "
\n
Question: Invoke the workflow
\n

Run the imported workflow on the uploaded dataset inside the same history using workflows.invoke_workflow() .

\n
👁 View solution\n
\n
inputs = {0: {'id': hda['id'], 'src': 'hda'}}\nret = gi.workflows.invoke_workflow(wf['id'], inputs=inputs, history_id=new_hist['id'])\npprint(ret)\n
\n
\n
\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-65", "source": [ "# Try it out here!\n", "" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ "python" ], "id": "" } } }, { "id": "cell-66", "source": "
\n
Question: View the results
\n

View the results on the Galaxy server with your web browser. Were you successful? Did it run?

\n
\n

Interacting with histories in BioBlend.objects

\n

You need to insert the API key for your Galaxy server in the cell below:

\n
    \n
  1. Open the Galaxy server in another browser tab
  2. Click on “User” on the top menu, then “Preferences”
  3. Click on “Manage API key”
  4. Generate an API key if needed, then copy the alphanumeric string and paste it as the value of the api_key variable below.
\n

The user interacts with a Galaxy server through a GalaxyInstance object:

\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-67", "source": [ "from pprint import pprint\n", "\n", "import bioblend.galaxy.objects\n", "\n", "server = 'https://usegalaxy.eu/'\n", "api_key = ''\n", "gi = bioblend.galaxy.objects.GalaxyInstance(url=server, api_key=api_key)" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ "python" ], "id": "" } } }, { "id": "cell-68", "source": "

All GalaxyInstance method calls have the client.method() format, where client is the name of the resource you are dealing with. There are two methods to get the list of resources:

\n\n

For example, the call to retrieve previews of all histories owned by the current user is:

\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-69", "source": [ "pprint(gi.histories.get_previews())" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ "python" ], "id": "" } } }, { "id": "cell-70", "source": "

New resources are created with create() methods, e.g. to create a new history:

\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-71", "source": [ "new_hist = gi.histories.create(name='BioBlend test')\n", "new_hist" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ "python" ], "id": "" } } }, { "id": "cell-72", "source": "

As you can see, the create() methods in BioBlend.objects return an object, not a dictionary.

\n

Both get_previews() and list() methods usually have filtering capabilities, e.g. it is possible to filter histories by name:

\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-73", "source": [ "pprint(gi.histories.list(name='BioBlend test'))" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ "python" ], "id": "" } } }, { "id": "cell-74", "source": "

To upload the local file 1.txt to the new history, you can run the special upload tool by calling the upload_file method of the History object.

\n

You can obtain the 1.txt file from the following URL; you’ll need to download it first.

\n
https://raw.githubusercontent.com/nsoranzo/bioblend-tutorial/main/test-data/1.txt\n
\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-75", "source": [ "hda = new_hist.upload_file(\"1.txt\")\n", "hda" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ "python" ], "id": "" } } }, { "id": "cell-76", "source": "

Please note that with BioBlend.objects there is no need to find the uploaded dataset, since upload_file() already returns a HistoryDatasetAssociation object.

\n

Both HistoryPreview and History objects have many of their properties available as attributes, e.g. the id.

\n

If you need to specify the unique id of the resource to retrieve, you can use the get() method, e.g. to get back the history we created before:

\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-77", "source": [ "gi.histories.get(new_hist.id)" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ "python" ], "id": "" } } }, { "id": "cell-78", "source": "

To get the list of datasets contained in a history, simply look at the content_infos attribute of the History object.

\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-79", "source": [ "pprint(new_hist.content_infos)" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ "python" ], "id": "" } } }, { "id": "cell-80", "source": "

To get the details about one dataset, you can use the get_dataset() method of the History object:

\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-81", "source": [ "new_hist.get_dataset(hda.id)" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ "python" ], "id": "" } } }, { "id": "cell-82", "source": "

You can also filter history datasets by name using the get_datasets() method of History objects.

\n

To update a resource, use the update() method of its object, e.g. to change the history name:

\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-83", "source": [ "new_hist.update(name='Updated history')" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ "python" ], "id": "" } } }, { "id": "cell-84", "source": "

The return value of update() methods is the updated object.

\n

Finally to delete a resource, you can use the delete() method of the object, e.g.:

\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-85", "source": [ "new_hist.delete()" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ "python" ], "id": "" } } }, { "id": "cell-86", "source": "

Exercise: BioBlend.objects

\n

Goal: Upload a file to a new history, import a workflow and run it on the uploaded dataset.

\n
\n
Question: Initialise
\n

Create a GalaxyInstance object.

\n
👁 View solution\n
\n
from pprint import pprint\n\nimport bioblend.galaxy.objects\n\nserver = 'https://usegalaxy.eu/'\napi_key = ''\ngi = bioblend.galaxy.objects.GalaxyInstance(url=server, api_key=api_key)\n
\n
\n
\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-87", "source": [ "# Try it out here!\n", "" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ "python" ], "id": "" } } }, { "id": "cell-88", "source": "
\n
Question: New History
\n

Create a new Galaxy history.

\n
👁 View solution\n
\n
new_hist = gi.histories.create(name='New history')\nnew_hist\n
\n
\n
\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-89", "source": [ "# Try it out here!\n", "" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ "python" ], "id": "" } } }, { "id": "cell-90", "source": "
\n
Question: Upload a dataset
\n

Upload the local file 1.txt to the new history using the upload_file() method of History objects.

\n

You can obtain the 1.txt file from the following URL; you’ll need to download it first.

\n
https://raw.githubusercontent.com/nsoranzo/bioblend-tutorial/main/test-data/1.txt\n
\n
👁 View solution\n
\n
hda = new_hist.upload_file(\"1.txt\")\nhda\n
\n
\n
\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-91", "source": [ "# Try it out here!\n", "" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ "python" ], "id": "" } } }, { "id": "cell-92", "source": "
\n
Question: Import a workflow
\n

Import a workflow from the local file convert_to_tab.ga using workflows.import_new().

\n

You can obtain the convert_to_tab.ga file from the following URL; you’ll need to download it first.

\n
https://raw.githubusercontent.com/nsoranzo/bioblend-tutorial/main/test-data/convert_to_tab.ga\n
\n
👁 View solution\n
\n
with open(\"convert_to_tab.ga\", \"r\") as f:\n    wf_string = f.read()\nwf = gi.workflows.import_new(wf_string)\nwf\n
\n
\n
\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-93", "source": [ "# Try it out here!\n", "" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ "python" ], "id": "" } } }, { "id": "cell-94", "source": "
\n
Question: View the workflow inputs
\n

View the inputs of the imported workflow, e.g. via the inputs attribute of Workflow objects.

\n
👁 View solution\n
\n
wf.inputs\n
\n
\n
\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-95", "source": [ "# Try it out here!\n", "" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ "python" ], "id": "" } } }, { "id": "cell-96", "source": "
\n
Question: Invoke the workflow
\n

Run the imported workflow on the uploaded dataset inside the same history using the invoke() method of Workflow objects.

\n
👁 View solution\n
\n
inputs = {'0': hda}\nwf.invoke(inputs=inputs, history=new_hist)\n
\n
\n
\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "id": "cell-97", "source": [ "# Try it out here!\n", "" ], "cell_type": "code", "execution_count": null, "outputs": [ ], "metadata": { "attributes": { "classes": [ "python" ], "id": "" } } }, { "id": "cell-98", "source": "
\n
Question: View the results
\n

View the results on the Galaxy server with your web browser. Were you successful? Did it run?

\n
\n

Optional Extra Exercises

\n

If you have completed the exercise, you can try to perform these extra tasks with the help of the online documentation:

\n
    \n
  1. Download the workflow result to your computer
  2. Publish your history
\n", "cell_type": "markdown", "metadata": { "editable": false, "collapsed": false } }, { "cell_type": "markdown", "id": "final-ending-cell", "metadata": { "editable": false, "collapsed": false }, "source": [ "# Key Points\n\n", "- The API allows you to use Galaxy's capabilities programmatically.\n", "- BioBlend makes using the Galaxy API from Python easier.\n", "- BioBlend objects is an object-oriented interface for interacting with Galaxy.\n", "\n# Congratulations on successfully completing this tutorial!\n\n", "Please [fill out the feedback on the GTN website](https://training.galaxyproject.org/training-material/topics/dev/tutorials/bioblend-api/tutorial.html#feedback) and check there for further resources!\n" ] } ] }