# Local Development Setup
## Prerequisites
## Preparation
### Setup Ceph Cluster

Reference: rook docs
#### Install cert-manager
If there is no cert-manager present in the cluster, it needs to be installed:

```shell
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.8.0/cert-manager.yaml
```
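To confirm the installation, you can check that the cert-manager pods (installed into the `cert-manager` namespace by default) reach the `Running` state:

```shell
kubectl -n cert-manager get pods
```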
#### Setup ironcore

Reference: ironcore docs
#### Setup Rook

- Install the Rook operator and CRDs:

  ```shell
  kubectl apply -k ./rook
  ```
- Verify the `rook-ceph-operator` is in the `Running` state before proceeding:

  ```shell
  kubectl -n rook-ceph get pod
  ```
- Create a Rook Ceph Cluster, see: Rook Docs
- Verify the cluster installation by listing all Rook pods again:

  ```shell
  kubectl -n rook-ceph get pod
  ```

  In the end you should see all pods `Running` or `Completed`, with at least one `rook-ceph-osd-*` pod:

  ```
  NAME                                            READY   STATUS      RESTARTS   AGE
  csi-cephfsplugin-b7ktv                          3/3     Running     0          63d
  csi-cephfsplugin-provisioner-59499cbcdd-wvnfq   6/6     Running     0          63d
  csi-rbdplugin-bs4tn                             3/3     Running     6          63d
  csi-rbdplugin-provisioner-857d65496c-mxjp4      6/6     Running     0          63d
  rook-ceph-mgr-a-769964c967-9kmxq                1/1     Running     0          26d
  rook-ceph-mon-a-66b5cfc47f-8d4ts                1/1     Running     0          63d
  rook-ceph-operator-75c6d6bbfc-b9q9n             1/1     Running     0          63d
  rook-ceph-osd-0-7464fbbd49-szdrp                1/1     Running     0          63d
  rook-ceph-osd-prepare-minikube-7t4mk            0/1     Completed   0          6d8h
  ```
- Deploy a `CephCluster`:

  ```shell
  kubectl apply -f ./rook/cluster.yaml
  ```

  Ensure that the cluster is in the `Ready` phase (see the verification sketch after this list):

  ```shell
  kubectl get cephcluster -A
  ```
- Deploy a `CephBlockPool`, `CephObjectStore` & `StorageClass`:

  ```shell
  kubectl apply -f ./rook/pool.yaml
  ```
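To double-check the last two steps, here is a minimal verification sketch; the resource names shown by these commands depend on the manifests in `./rook`:

```shell
# The CephCluster should eventually report PHASE=Ready
kubectl -n rook-ceph get cephcluster

# The pool and object store created from pool.yaml should be listed
kubectl -n rook-ceph get cephblockpool,cephobjectstore

# StorageClasses are cluster-scoped
kubectl get storageclass
```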
## Clone the Repository
To bring up and start the `ceph-provider` project locally for development purposes, you first need to clone the repository:

```shell
git clone git@github.com:ironcore-dev/ceph-provider.git
cd ceph-provider
```
## Build the ceph-provider
- Build the `ceph-volume-provider`:

  ```shell
  make build-volume
  ```
- Build the `ceph-bucket-provider`:

  ```shell
  make build-bucket
  ```
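As a quick smoke test you can invoke the freshly built binaries with `--help`; the output directory and binary names below are assumptions, so check the `Makefile` for the actual values:

```shell
# Assumed output location and binary names; the Makefile is authoritative.
ls ./bin
./bin/ceph-volume-provider --help
./bin/ceph-bucket-provider --help
```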
## Run the ceph-volume-provider
The required `ceph-provider` flags need to be defined in order to connect to Ceph. The following command starts a `ceph-volume-provider` and connects it to a local Ceph cluster:
```shell
go run ./cmd/volumeprovider/main.go \
  --address=./iri-volume.sock \
  --supported-volume-classes=./classes.json \
  --zap-log-level=2 \
  --ceph-key-file=./key \
  --ceph-monitors=192.168.64.23:6789 \
  --ceph-user=admin \
  --ceph-pool=ceph-provider-pool \
  --ceph-client=client.ceph-provider-pool
```
Sample supported-volume-classes file (referenced above as `./classes.json`):

```json
[
  {
    "name": "experimental",
    "capabilities": {
      "tps": 262144000,
      "iops": 15000
    }
  }
]
```
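The `tps` capability expresses the class's throughput per second, here 262144000 bytes (250 MiB/s), and `iops` its I/O operations per second.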
The Ceph `key` can be retrieved from the keyring by base64-decoding the keyring secret and using only the `key` value:

```shell
kubectl get secrets -n rook-ceph rook-ceph-admin-keyring -o yaml
```
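A minimal sketch of that extraction, assuming the secret stores the keyring under the `keyring` data field (the Rook default) and writing the bare key to the `./key` file expected by `--ceph-key-file`:

```shell
# Decode the keyring and keep only the value of its "key = ..." line
kubectl get secrets -n rook-ceph rook-ceph-admin-keyring \
  -o jsonpath='{.data.keyring}' | base64 -d \
  | awk '$1 == "key" {print $3}' > ./key
```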
## Run the ceph-bucket-provider
The required `ceph-provider` flags need to be defined in order to work with Rook. The following command starts a `ceph-bucket-provider`. The flag `--bucket-pool-storage-class-name` defines the `StorageClass`, and thereby implicitly the `CephBlockPool` (see rook docs):
```shell
go run ./cmd/bucketprovider/main.go \
  --address=./iri-bucket.sock \
  --bucket-pool-storage-class-name=rook-ceph-bucket
```
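Once started, the provider serves its API on the given unix socket; a quick check that the socket was created:

```shell
ls -l ./iri-bucket.sock
```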
## Interact with the ceph-provider
### Prerequisites
- `irictl-volume`
    - running locally, or
    - https://github.com/ironcore-dev/ironcore/pkgs/container/ironcore-irictl-volume
- `irictl-bucket`
    - running locally, or
    - https://github.com/ironcore-dev/ironcore/pkgs/container/ironcore-irictl-bucket
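One way to get the CLIs running locally is to execute them straight from the ironcore repository with the Go toolchain; the module paths below assume ironcore's repository layout and should be treated as assumptions:

```shell
# Assumed package paths within github.com/ironcore-dev/ironcore; verify before use.
go run github.com/ironcore-dev/ironcore/irictl-volume/cmd/irictl-volume@latest --help
go run github.com/ironcore-dev/ironcore/irictl-bucket/cmd/irictl-bucket@latest --help
```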
### Listing supported `VolumeClass`es

```shell
irictl-volume --address=unix:./iri-volume.sock get volumeclass
```

```
Name           TPS         IOPS
experimental   262144000   15000
```
### Creating a `Volume`

```shell
irictl-volume --address=unix:./iri-volume.sock create volume -f ./volume.json
```

```
Created volume 796264618065bb31024ec509d4ed8a87ed098ee8e89b370c06b0522ba4bf1e2
```
Sample `volume.json`:

```json
{
  "metadata": {
    "labels": {
      "test.api.ironcore.dev/volume-name": "test"
    }
  },
  "spec": {
    "class": "experimental",
    "resources": {
      "storage_bytes": 10070703360
    }
  }
}
```
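The `storage_bytes` field is the requested volume size in bytes; 10070703360 bytes is roughly 9.4 GiB.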
### Listing `Volume`s

```shell
irictl-volume --address=unix:./iri-volume.sock get volume
```

```
ID                                                                Class          Image   State              Age
796264618065bb31024ec509d4ed8a87ed098ee8e89b370c06b0522ba4bf1e2   experimental           VOLUME_AVAILABLE   2s
```
### Deleting a `Volume`

```shell
irictl-volume --address=unix:./iri-volume.sock delete volume 796264618065bb31024ec509d4ed8a87ed098ee8e89b370c06b0522ba4bf1e2
```

```
Volume 796264618065bb31024ec509d4ed8a87ed098ee8e89b370c06b0522ba4bf1e2 deleted
```