add some pictures and howto to playground

Cornelius Specht 2023-06-22 15:49:29 +02:00
parent c193f694ef
commit 808a66b442
10 changed files with 90 additions and 36 deletions


@ -1,6 +1,6 @@
# Hardware
Why did we choose this configuration?
The server configuration was chosen to best cover the various requirements. The budget of XXX was too small to build a high-performance AI cluster, and that would also have been the wrong approach for this use case. A configuration was needed where smaller AI models can be trained and deployed using GPU power, providing students with a platform on which to experiment and learn. Therefore, we opted for a rather unusual configuration with high-performance CPUs, a high-performance GPU and a lot of memory. With the help of the CPUs, many students can experiment and gain initial experience. If the CPU performance is no longer sufficient, you have the option of starting a GPU image in which a limited GPU capacity is available. If this is also not sufficient and you want to train your own model over several hours or days, for example, you can use Git and CI to write your own pipeline that trains the model. However, this involves some technical effort and is therefore only recommended for scientific users. To ensure that this does not affect the students' normal training process or experiments, we had to make sure that there is always enough memory available for the A100 GPU to work correctly.
## Sandbox Server Configuration


@ -1,34 +1,33 @@
# Architecture Overview (mixed use -> presentation)
The following scenarios can be derived for the system: on the one hand, the system is to be used for teaching, and on the other hand, it is to be used for training.
Therefore, a mixed-use architecture is provided, which allows the system to be used for these two different purposes/requirements; hence the following role concept has been designed:
# Roles
## User
* Teaching/Use Cases in the areas of ethics, business and law
* Working on assignments
* Grading
* Providing examples / demos
## Advanced User
* Teaching/Use Cases in the field of technology
* Working on assignments
* Grading
* Providing/developing (own) examples/demos
## Scientific User
* Data generation using MAX
* GPU performance
* CI/CD/GIT
* Storage
* SANDBOX MM Platform
The following technical requirements can be derived (see the configuration sketch after this list):
* Sandbox NB for testing examples and working on assignments (GPU limit of 1 GB per user)
* 1 GB of storage per user
* CPU 0.5 / limit 2
* Data provisioning by users:
* REST interface
* Git Large File Storage
* Upload File inside Sandbox-NB (GUI)
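A minimal sketch of how these per-user limits could be expressed, assuming the Sandbox is a JupyterHub deployment whose single-user servers are spawned on Kubernetes via KubeSpawner; the deployment details and exact values are assumptions, not the confirmed setup:
```python
# jupyterhub_config.py -- hypothetical excerpt, values taken from the list above
c = get_config()  # noqa: F821 (injected by JupyterHub when the config is loaded)

c.KubeSpawner.cpu_guarantee = 0.5        # CPU request per user
c.KubeSpawner.cpu_limit = 2              # CPU limit per user
c.KubeSpawner.storage_capacity = "1Gi"   # 1 GB persistent storage per user
# Limiting GPU memory to roughly 1 GB per user on the A100 is not a plain
# KubeSpawner knob; it would have to be enforced by the GPU image/scheduler.
```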


@ -2,7 +2,7 @@
resources
## images on Sandbox
Which Python packages are installed? How can I install a Python package?
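A minimal sketch of installing an additional package from inside a notebook cell, assuming pip is available in the image and user-level installs are permitted; the package name is only an example:
```python
# Install an extra package for the current user from within a running notebook
import subprocess
import sys

subprocess.check_call([sys.executable, "-m", "pip", "install", "--user", "seaborn"])
```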
@ -14,3 +14,5 @@ Which python packages are installed, How can I install a python package?
* The available GPU image is based on the [Official GPU Image](https://hub.docker.com/r/cschranz/gpu-jupyter)
* Support is included for calculations on the NVIDIA A100 GPU with the most common GPU-enabled Python libraries: TensorFlow, PyTorch and Keras (see the quick check below)
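A minimal sketch for verifying from inside a notebook that the GPU image actually sees the A100, assuming TensorFlow and PyTorch are installed in the image as stated above:
```python
# Quick check whether the notebook kernel can see the GPU
import tensorflow as tf
import torch

print("TensorFlow GPUs:", tf.config.list_physical_devices("GPU"))
print("PyTorch CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("PyTorch device:", torch.cuda.get_device_name(0))
```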
### CLEANUP --> at the end of each Semester

Six binary image files added (58 KiB, 171 KiB, 54 KiB, 13 KiB, 13 KiB and 11 KiB).


@ -1,19 +1,72 @@
# Services ![sandbox](https://uptime.monitoring.iuk.hdm-stuttgart.de/api/badge/1/status)
# Playground
Within the sandbox, the different disciplines inside the IKID project can provide tasks to be worked on by the respective student groups. Both text-based tasks and programmatic tasks can be provided and processed. For example, Markdown files can be created for editing textual tasks. These files can be converted from the source form (unformatted) to the target form (formatted) using a simple syntax.
Currently, it is planned for the technical lectures that the students first get in touch with the programming language Python. Therefore, the Sandbox platform was created, in which experiments with Python can be carried out. If needed, it is possible to add more supported languages in the future.
[Sandbox](https://sandbox.iuk.hdm-stuttgart.de/)
(only accessible from the HdM-Network)
1. **Sign in** using your **HdM credentials**
2. Select the image you want to start (two options)
1. **Datascience environment**
2. **GPU environment** (choose only if you really need the graphics card; otherwise you take resources away from those who need them)
3. Create or upload a .ipynb file to start with
1. **Create an empty .ipynb file:**
![sandbox launcher](res/sandbox_launcher.png "Sandbox Launcher")
2. **Upload an existing .ipynb file:**
![sandbox upload file](res/sandbox_upload_file_selector.png "Sandbox upload file")
4. **Open** the file from the file browser & start working! (A small first-cell example is shown after these steps.)
![sandbox .ipynb file](res/sandbox_ipynb_example.png "Sandbox Notebook")
5. **After you have finished your work, don't forget to shut down your server** to release server resources. **Select File, Hub Control Panel**
![file menu](res/sandbox_file_menu.png "File menu")
6. Select **Stop Server**
![stop server](res/sandbox_stop_server.png "Stop Server")
7. Select the arrow in the upper right corner to **Logout**
![logout](res/sandbox_logout.png)
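As a first experiment in the new notebook, a cell like the following could be run; this is a minimal sketch with example values only, nothing here is specific to the Sandbox:
```python
# A tiny first cell: compute a few statistics with plain Python
grades = [1.0, 1.7, 2.3, 2.0, 1.3]
average = sum(grades) / len(grades)
print(f"{len(grades)} grades, average = {average:.2f}")
```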
# Training
Coming Soon ...
# Storage
In the following section we describe how to store data on the Sandbox. There are three different ways to do so: inside the Sandbox, object storage, and Git LFS.
## Inside the Sandbox
To store data inside the Sandbox, you just have to drag & drop the file you want to store into the file browser. You can also create folders and new notebooks. The only limitation is that each user has 1 GB of storage.
## Object storage
To use the object storage, you can upload a file via the REST interface and access it by the key provided in the response. To upload your file:
**Upload Example**
```python
import requests

# Upload a local CSV file to the shared object storage
url = "https://share.storage.sandbox.iuk.hdm-stuttgart.de/upload"
filename = "ds_salaries.csv"
with open(filename, "rb") as f:
    r = requests.post(url, files={"fileUpload": (filename, f, "text/csv")})
# The response contains the key under which the uploaded file can be accessed
print(r.status_code, r.text)
```
**Usage Example**
```python
import pandas as pd

# Read the uploaded CSV directly via the URL containing the key from the upload response
url = "https://storage.sandbox.iuk.hdm-stuttgart.de/upload/ec6c1c9c-ea9b-47ff-97cf-f92d870e8fb9/ds_salaries.csv"
df = pd.read_csv(url)
```
## Git LFS
We highly recommend the following solution only for users who are familiar with the Git command line tools! Git Large File Storage (LFS) is an open source Git extension for versioning large files. Git LFS replaces large files (audio, samples, datasets, videos) with text pointers inside Git, while the files themselves are stored on our Gitea server.
For further information visit [Git LFS](https://git-lfs.com/).
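A minimal sketch of a typical Git LFS workflow, driven from Python via `subprocess` so it can be run from a notebook cell; it assumes Git and the git-lfs extension are installed, that it is run inside an existing Git working copy, and the file pattern, file name and commit message are placeholders:
```python
import subprocess

def run(*cmd):
    """Run a command and fail loudly if it does not succeed."""
    subprocess.run(cmd, check=True)

run("git", "lfs", "install")              # enable LFS hooks for this repository
run("git", "lfs", "track", "*.csv")       # placeholder pattern: track CSV files via LFS
run("git", "add", ".gitattributes")       # the tracking rules live in .gitattributes
run("git", "add", "ds_salaries.csv")      # placeholder file name
run("git", "commit", "-m", "Add dataset via Git LFS")
run("git", "push")                        # large files are pushed to the Gitea LFS store
```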
## Tmp Binary
headless binary curl
### Data Pool
lfs