
Commit 650c81c

chore: rename property endpoints
1 parent 56f9806 commit 650c81c

7 files changed, +119 −119 lines changed
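For downstream scripts, the whole commit reduces to one mechanical substitution in how the properties endpoint is imported and constructed. A minimal before/after sketch (module paths and class names are taken from the hunks below; ENDPOINT_ARGS is assumed to be the same connection/authentication arguments the example notebooks already assemble from utils/settings.json):

# Before this commit:
# from exabyte_api_client.endpoints.raw_properties import RawPropertiesEndpoints
# raw_property_endpoints = RawPropertiesEndpoints(*ENDPOINT_ARGS)

# After this commit the module and class are renamed; construction and usage are otherwise unchanged.
from exabyte_api_client.endpoints.properties import PropertiesEndpoints

property_endpoints = PropertiesEndpoints(*ENDPOINT_ARGS)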

examples/job/get-file-from-job.ipynb

+1 −1
@@ -134,7 +134,7 @@
 "from exabyte_api_client.endpoints.projects import ProjectEndpoints\n",
 "from exabyte_api_client.endpoints.materials import MaterialEndpoints\n",
 "from exabyte_api_client.endpoints.bank_workflows import BankWorkflowEndpoints\n",
-"from exabyte_api_client.endpoints.raw_properties import RawPropertiesEndpoints"
+"from exabyte_api_client.endpoints.properties import PropertiesEndpoints"
 ]
 },
 {

examples/job/get-file-from-job.py

+25 −25
@@ -6,43 +6,43 @@
 # </a>

 # # Get-File-From-Job
-#
+#
 # This example demonstrates how to use Mat3ra RESTful API to check for and acquire files from jobs which have been run. This example assumes that the user is already familiar with the [creation and submission of jobs](create_and_submit_jobs.ipynb) using our API.
-#
+#
 # > <span style="color: orange">**IMPORTANT NOTE**</span>: In order to run this example in full, an active Mat3ra.com account is required. Alternatively, Readers may substitute the workflow ID below with another one (an equivalent one for VASP, for example) and adjust extraction of the results ("Viewing job files" section). RESTful API credentials shall be updated in [settings](../../utils/settings.json).
-#
-#
+#
+#
 # ## Steps
-#
+#
 # After working through this notebook, you will be able to:
-#
+#
 # 1. Import [the structure of Si](https://materialsproject.org/materials/mp-149/) from Materials Project
 # 2. Set up and run a single-point calculation using Quantum Espresso.
 # 3. List files currently in the job's directory
 # 4. Check metadata for every file (modification date, size, etc)
 # 5. Access file contents directly and print them to console
 # 6. Download files to your local machine
-#
+#
 # ## Pre-requisites
-#
+#
 # The explanation below assumes that the reader is familiar with the concepts used in Mat3ra platform and RESTful API. We outline these below and direct the reader to the original sources of information:
-#
+#
 # - [Generating RESTful API authentication parameters](../system/get_authentication_params.ipynb)
 # - [Importing materials from materials project](../material/import_materials_from_materialsproject.ipynb)
 # - [Creating and submitting jobs](../job/create_and_submit_job.ipynb)

 # # Complete Authorization Form and Initialize Settings
-#
+#
 # This will also determine environment and set all environment variables. We determine if we are using Jupyter Notebooks or Google Colab to run this tutorial.
-#
+#
 # If you are running this notebook from Google Colab, Colab takes ~1 min to execute the following cell.
-#
+#
 # ACCOUNT_ID and AUTH_TOKEN - Authentication parameters needed for when making requests to [Mat3ra.com's API Endpoints](https://docs.mat3ra.com/rest-api/endpoints/).
-#
+#
 # MATERIALS_PROJECT_API_KEY - Authentication parameter needed for when making requests to [Material Project's API](https://materialsproject.org/open)
-#
+#
 # ORGANIZATION_ID - Authentication parameter needed for when working with collaborative accounts https://docs.mat3ra.com/collaboration/organizations/overview/
-#
+#
 # > <span style="color: orange">**NOTE**</span>: If you are running this notebook from Jupyter, the variables ACCOUNT_ID, AUTH_TOKEN, MATERIALS_PROJECT_API_KEY, and ORGANIZATION_ID should be set in the file [settings.json](../../utils/settings.json) if you need to use these variables. To obtain API token parameters, please see the following link to the documentation explaining how to get them: https://docs.mat3ra.com/accounts/ui/preferences/api/

 # In[ ]:
@@ -88,19 +88,19 @@
 from exabyte_api_client.endpoints.projects import ProjectEndpoints
 from exabyte_api_client.endpoints.materials import MaterialEndpoints
 from exabyte_api_client.endpoints.bank_workflows import BankWorkflowEndpoints
-from exabyte_api_client.endpoints.raw_properties import RawPropertiesEndpoints
+from exabyte_api_client.endpoints.properties import PropertiesEndpoints


 # ### Create and submit the job
-#
+#
 # For this job, we'll use the workflow located [here](https://platform.mat3ra.com/analytics/workflows/84DAjE9YyTFndx6z3).
-#
+#
 # This workflow is a single-point total energy calculation using Density Functional Theory as implemented in Quantum Espresso version 5.4.0.
-#
+#
 # The PBE functional is used in conjunction with an ultrasoft pseudopotential and a planewave basis set.
-#
+#
 # The material we will investigate is elemental [Silicon](https://materialsproject.org/materials/mp-149/), as-is from Materials Project.
-#
+#
 # > <span style="color: orange">Note</span>: This cell uses our API to copy the unit cell of silicon from Materials Project into your account. It then copies a workflow to get the total energy of a system using Quantum Espresso to your account. Finally, a job is created using the Quantum Espresso workflow for the silicon unit cell, and the job is submitted to the cluster. For more information, please refer to our [run-simulation-and-extract-properties](./run-simulations-and-extract-properties.ipynb) notebook, located in this directory.

 # In[ ]:
@@ -137,7 +137,7 @@

 # ## Viewing job files
 # ### Retrieve a list of job files
-#
+#
 # Here, we'll get a list of all files that belong to the job.

 # In[ ]:
@@ -152,7 +152,7 @@

 # ### Get metadata for the Output File
 # The .out file is where Quantum Espresso shows its work and prints its results, so you most likely will want to view this file. Let's print out some of its metadata.
-#
+#
 # You'll find that we get a lot of data describing the file and its provenance. Brief explanations of each entry are:
 # - Key - Path to the file on the cluster
 # - size - Size of the file, in bytes.
@@ -173,7 +173,7 @@


 # ### Display file contents to console
-#
+#
 # The signedUrl gives us a place to access the file and download it. Let's read it into memory, and print out the last few lines of our job.

 # In[ ]:
@@ -195,7 +195,7 @@


 # ### Save the input file and output file to disk.
-#
+#
 # Now that we've verified the job is done, let's go ahead and save it and its input to disk.

 # In[ ]:

examples/job/ml-train-model-predict-properties.ipynb

+4 −4
@@ -142,7 +142,7 @@
 "from exabyte_api_client.endpoints.materials import MaterialEndpoints\n",
 "from exabyte_api_client.endpoints.workflows import WorkflowEndpoints\n",
 "from exabyte_api_client.endpoints.bank_workflows import BankWorkflowEndpoints\n",
-"from exabyte_api_client.endpoints.raw_properties import RawPropertiesEndpoints"
+"from exabyte_api_client.endpoints.properties import PropertiesEndpoints"
 ]
 },
 {
@@ -253,7 +253,7 @@
 "material_endpoints = MaterialEndpoints(*ENDPOINT_ARGS)\n",
 "workflow_endpoints = WorkflowEndpoints(*ENDPOINT_ARGS)\n",
 "bank_workflow_endpoints = BankWorkflowEndpoints(*ENDPOINT_ARGS)\n",
-"raw_property_endpoints = RawPropertiesEndpoints(*ENDPOINT_ARGS)"
+"property_endpoints = PropertiesEndpoints(*ENDPOINT_ARGS)"
 ]
 },
 {
@@ -497,7 +497,7 @@
 "outputs": [],
 "source": [
 "ml_predict_workflow = get_property_by_subworkflow_and_unit_indicies(\n",
-"    raw_property_endpoints, \"workflow:ml_predict\", job, 0, 4\n",
+"    property_endpoints, \"workflow:ml_predict\", job, 0, 4\n",
 ")[\"data\"]\n",
 "ml_predict_workflow_id = ml_predict_workflow[\"_id\"]"
 ]
@@ -620,7 +620,7 @@
 "outputs": [],
 "source": [
 "predicted_properties = get_property_by_subworkflow_and_unit_indicies(\n",
-"    raw_property_endpoints, \"predicted_properties\", job, 0, 3\n",
+"    property_endpoints, \"predicted_properties\", job, 0, 3\n",
 ")[\"data\"][\"values\"]"
 ]
 },

examples/job/ml-train-model-predict-properties.py

+34 −34
@@ -6,43 +6,43 @@
 # </a>

 # # Overview
-#
+#
 # This example demonstrates how to use Mat3ra RESTful API to build a machine learning (ML) model for a set of materials called "train materials" and use the model to predict properties of another set called "target materials". The general approach can work for multiple properties; we use the Electronic Band Gap in this example.
-#
-#
-#
+#
+#
+#
 # ## Steps
-#
+#
 # We follow the below steps:
-#
+#
 # - Import materials from [materials project](https://materialsproject.org/)
 # - Calculate band gap for the "train materials"
 # - Build ML Train model based on the "train materials"
 # - Create and submit a job to predict band gap for the "target materials"
 # - Extract band gap for "target materials"
 # - Output the results as Pandas dataFrame
-#
+#
 # ## Pre-requisites
-#
+#
 # The explanation below assumes that the reader is familiar with the concepts used in Mat3ra platform and RESTful API. We outline these below and direct the reader to the original sources of information:
-#
+#
 # - [Generating RESTful API authentication parameters](../system/get_authentication_params.ipynb)
 # - [Importing materials from materials project](../material/import_materials_from_materialsproject.ipynb)
 # - [Creating and submitting jobs](./create_and_submit_job.ipynb)
 # - [Running DFT calculations](./run-simulations-and-extract-properties.ipynb)

 # # Complete Authorization Form and Initialize Settings
-#
+#
 # This will also determine environment and set all environment variables. We determine if we are using Jupyter Notebooks or Google Colab to run this tutorial.
-#
+#
 # If you are running this notebook from Google Colab, Colab takes ~1 min to execute the following cell.
-#
+#
 # ACCOUNT_ID and AUTH_TOKEN - Authentication parameters needed for when making requests to [Mat3ra.com's API Endpoints](https://docs.mat3ra.com/rest-api/endpoints/).
-#
+#
 # MATERIALS_PROJECT_API_KEY - Authentication parameter needed for when making requests to [Material Project's API](https://materialsproject.org/open)
-#
+#
 # ORGANIZATION_ID - Authentication parameter needed for when working with collaborative accounts https://docs.mat3ra.com/collaboration/organizations/overview/
-#
+#
 # > <span style="color: orange">**NOTE**</span>: If you are running this notebook from Jupyter, the variables ACCOUNT_ID, AUTH_TOKEN, MATERIALS_PROJECT_API_KEY, and ORGANIZATION_ID should be set in the file [settings.json](../../utils/settings.json) if you need to use these variables. To obtain API token parameters, please see the following link to the documentation explaining how to get them: https://docs.mat3ra.com/accounts/ui/preferences/api/

 # In[ ]:
@@ -96,13 +96,13 @@
 from exabyte_api_client.endpoints.materials import MaterialEndpoints
 from exabyte_api_client.endpoints.workflows import WorkflowEndpoints
 from exabyte_api_client.endpoints.bank_workflows import BankWorkflowEndpoints
-from exabyte_api_client.endpoints.raw_properties import RawPropertiesEndpoints
+from exabyte_api_client.endpoints.properties import PropertiesEndpoints


 # #### Materials
-#
+#
 # Set parameters for the materials to be imported:
-#
+#
 # - **TRAIN_MATERIALS_PROJECT_IDS**: a list of material IDs to train ML model based on
 # - **TARGET_MATERIALS_PROJECT_IDS**: a list of material IDs to predict the property for

@@ -114,9 +114,9 @@


 # #### Jobs
-#
+#
 # Set parameters for the jobs to be run for the imported materials:
-#
+#
 # - **JOB_NAME_PREFIX**: prefix to be used for the job name with "{JOB_NAME_PREFIX} {FORMULA}" convention (e.g. "Job Name Prefix - SiGe")

 # In[ ]:
@@ -126,9 +126,9 @@


 # #### Compute
-#
+#
 # Set up compute parameters. See [this](https://docs.mat3ra.com/infrastructure/compute-settings/ui) for more information about compute parameters.
-#
+#
 # - **NODES**: Number of nodes. Defaults to 1.
 # - **PPN**: Number of MPI processes per each node. Defaults to 1.
 # - **QUEUE**: The name of queue to submit the jobs into. Defaults to D.
@@ -155,7 +155,7 @@
 material_endpoints = MaterialEndpoints(*ENDPOINT_ARGS)
 workflow_endpoints = WorkflowEndpoints(*ENDPOINT_ARGS)
 bank_workflow_endpoints = BankWorkflowEndpoints(*ENDPOINT_ARGS)
-raw_property_endpoints = RawPropertiesEndpoints(*ENDPOINT_ARGS)
+property_endpoints = PropertiesEndpoints(*ENDPOINT_ARGS)


 # Retrieve the owner and project IDs as they are needed by the endpoints. The default material is used to extract the owner ID. One can extract the owner ID from any other account's [entities](https://docs.mat3ra.com/entities-general/overview/).
@@ -168,7 +168,7 @@


 # ### Create workflows
-#
+#
 # Copy "ML: Train Model" and "Band Gap" bank workflows to the account's workflows. We use exabyte bank workflows which are identified by "systemName" field. The below can be adjusted to get the bank workflows by ID.

 # In[ ]:
@@ -179,7 +179,7 @@


 # ### Import materials
-#
+#
 # Import materials from materials project.

 # In[ ]:
@@ -194,7 +194,7 @@


 # ### Calculate Properties for "train materials"
-#
+#
 # Create jobs for the "train materials".

 # In[ ]:
@@ -225,7 +225,7 @@


 # ### Build ML Train model
-#
+#
 # Create ML Train job for the train materials.

 # In[ ]:
@@ -254,14 +254,14 @@


 # ### Extract ML model as workflow
-#
+#
 # The resulting trained model is extracted from the last unit (train with index 4) of the first job's subworkflow (ML: Train Model with index 0) and is further referred to as "ML predict workflow".

 # In[ ]:


 ml_predict_workflow = get_property_by_subworkflow_and_unit_indicies(
-    raw_property_endpoints, "workflow:ml_predict", job, 0, 4
+    property_endpoints, "workflow:ml_predict", job, 0, 4
 )["data"]
 ml_predict_workflow_id = ml_predict_workflow["_id"]

@@ -275,7 +275,7 @@


 # ### Predict property using the model
-#
+#
 # Create ML Predict job for the predict materials.

 # In[ ]:
@@ -304,19 +304,19 @@


 # ### Extract predicted properties
-#
+#
 # Predicted properties are extracted from the last unit (score with index 3) of the first job's subworkflow (ml_predict_subworkflow with index 0).

 # In[ ]:


 predicted_properties = get_property_by_subworkflow_and_unit_indicies(
-    raw_property_endpoints, "predicted_properties", job, 0, 3
+    property_endpoints, "predicted_properties", job, 0, 3
 )["data"]["values"]


 # ### Flatten results
-#
+#
 # The below for-loop iterates over the results and flattens them to form the final Pandas dataFrame.

 # In[ ]:
@@ -334,7 +334,7 @@


 # ### Output results
-#
+#
 # Create and print the final table as Pandas dataFrame.

 # In[ ]:
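The call sites change the same way: the renamed endpoints object is simply passed where the old one was. A short sketch of the updated helper calls, copied from the hunks above (get_property_by_subworkflow_and_unit_indicies, job, and property_endpoints are assumed to be defined earlier in the script, as in these examples):

# Extract the trained ML model (subworkflow index 0, unit index 4) using the renamed endpoints object.
ml_predict_workflow = get_property_by_subworkflow_and_unit_indicies(
    property_endpoints, "workflow:ml_predict", job, 0, 4
)["data"]
ml_predict_workflow_id = ml_predict_workflow["_id"]

# Extract the predicted properties (subworkflow index 0, unit index 3) the same way.
predicted_properties = get_property_by_subworkflow_and_unit_indicies(
    property_endpoints, "predicted_properties", job, 0, 3
)["data"]["values"]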
