Do I have to install opencollective? #334
Replies: 9 comments 8 replies
-
Hello @YannGarcia, I don't think you do. Could you provide version information for your tools? I suspect you have a version installed on your system that is not supported. Here's the page with tool versioning...
-
Hello Mike,
Here are the tool versions I'm using:
***@***.***:~$ node -v
v10.16.3
***@***.***:~$ npm -v
6.9.0
***@***.***:~$ go version
go version go1.17.4 linux/amd64
***@***.***:~$ eslint -v
v5.16.0
***@***.***:~$ golangci-lint --version
golangci-lint has version v1.18.0 built from (unknown, mod sum:
"h1:XmQgfcLofSG/6AsQuQqmLizB+3GggD+o6ObBG9L+VMM=") on (unknown)
***@***.***:~$ uname --all
Linux FSCOM-MEC 5.11.0-41-generic #45~20.04.1-Ubuntu SMP Wed Nov 10
10:20:10 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
***@***.***:~$
I was able to start AdvantEDGE and access the GUI.
Unfortunately, the status is red due to meep-prometheus:
***@***.***:~$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                                READY   STATUS             RESTARTS   AGE
default       meep-auth-svc-7967bfd885-dnxq8                      1/1     Running            0          6m29s
default       meep-couchdb-couchdb-0                              1/1     Running            0          7m24s
default       meep-docker-registry-65b77797cb-9vnjb               1/1     Running            0          7m22s
default       meep-grafana-5786d54797-49q4m                       1/1     Running            0          7m19s
default       meep-influxdb-0                                     1/1     Running            0          7m16s
default       meep-ingress-controller-8pjzm                       1/1     Running            0          7m14s
default       meep-ingress-defaultbackend-5c57d5cd58-k8fd2        1/1     Running            0          7m14s
default       meep-kube-state-metrics-868576f6d4-6zrn7            1/1     Running            0          7m11s
default       meep-mon-engine-57d5ff7974-8bn7n                    1/1     Running            0          6m25s
default       meep-open-map-tiles-7d99b886f-nwmjj                 1/1     Running            0          7m6s
default       meep-platform-ctrl-55dff5cf77-dx4zb                 1/1     Running            0          6m23s
default       meep-postgis-0                                      2/2     Running            0          7m2s
default       meep-prometheus-couchdb-exporter-795d6b6dc5-ls9z8   0/1     ImagePullBackOff   0          6m45s
default       meep-prometheus-node-exporter-v9spf                 0/1     ImagePullBackOff   0          6m45s
default       meep-prometheus-operator-c8b8896d7-2k9z8            0/1     ImagePullBackOff   0          6m45s
default       meep-redis-master-0                                 2/2     Running            0          6m37s
default       meep-redis-slave-0                                  2/2     Running            0          6m37s
default       meep-virt-engine-65484699c6-9jw7t                   1/1     Running            0          6m20s
default       meep-webhook-6865678784-bx8hq                       1/1     Running            0          6m16s
default       pvc-deleter-job-8gdsn                               0/1     Completed          0          9h
kube-system   coredns-f9fd979d6-5db85                             1/1     Running            7          26h
kube-system   coredns-f9fd979d6-skpv2                             1/1     Running            7          26h
kube-system   etcd-fscom-mec                                      1/1     Running            10         26h
kube-system   kube-apiserver-fscom-mec                            1/1     Running            10         26h
kube-system   kube-controller-manager-fscom-mec                   1/1     Running            11         26h
kube-system   kube-proxy-dft4b                                    1/1     Running            10         26h
kube-system   kube-scheduler-fscom-mec                            1/1     Running            12         26h
kube-system   weave-net-cchqt                                     2/2     Running            22         26h
Another question: where can I find the demo-geo-sbox sample?
Many thanks for your support,
Best regards,
Yann Garcia
-
Hello Mike,
Hmm, things changed during the night :(
Now the Prometheus pods work fine, but there are some issues with demo1.
First, here is the output of meepctl version all:
***@***.***:~$ meepctl version all
Using repo config file: /home/yann/AdvantEDGE/.meepctl-repocfg.yaml
Using meepctl config file: /home/yann/.meepctl.yaml
{
"name": "meepctl",
"version": "1.8.0"
}
{
"name": ".meepctl-repocfg.yaml",
"version": "1.8.0"
}
{
"name": "meep-cert-manager",
"version": "NA"
}
{
"name": "couchdb",
"version": "couchdb:3.1.0",
"id": "b604d056d8024f10346eab768de7aea06bc0a1b7c55d6087e1b1cd4328c8061c"
}
{
"name": "docker-registry",
"version": "registry:2.7.1",
"id": "169211e20e2f2d5d115674681eb79d21a217b296b43374b8e39f97fcf866b375"
}
{
"name": "grafana",
"version": "grafana/grafana:7.3.5",
"id": "511bc20bfcd1b79f3947bb1c33d152f7484e7a91418883fb4dddf71274227321"
}
{
"name": "meep-influxdb",
"version": "influxdb:1.8.0-alpine",
"id": "5eca9dfe9930a3325323cef801827eb1b0940070465f8f215447b8e732c72b34"
}
{
"name": "meep-ingress",
"version": "NA"
}
{
"name": "kube-state-metrics",
"version": "quay.io/coreos/kube-state-metrics:v1.9.7",
"id": "2f82f0da199c60a7699c43c63a295c44e673242de0b7ee1b17c2d5a23bec34cb"
}
{
"name": "meep-minio",
"version": "NA"
}
{
"name": "meep-open-map-tiles",
"version": "NA"
}
{
"name": "meep-postgis",
"version": "postgis/postgis:12-3.0",
"id": "71acda16357f2973034483a4a8363cc9499061120b592bcc3b7f2fbed82da621"
}
{
"name": "meep-prometheus",
"version": "NA"
}
{
"name": "meep-redis",
"version": "NA"
}
{
"name": "meep-thanos",
"version": "NA"
}
{
"name": "meep-thanos-archive",
"version": "NA"
}
{
"name": "helm",
"version": "v3.3.4",
"id": "a61ce5633af99708171414353ed49547cf05013d"
}
{
"name": "docker client",
"version": "20.10.11",
"id": "dea9396"
}
{
"name": "docker server",
"id": "go1.16.9"
}
{
"name": "k8s client",
"version": "v1.19.1",
"id": "206bcadf021e76c27513500ca24182692aabd17e"
}
{
"name": "k8s server",
"version": "v1.19.16",
"id": "e37e4ab4cc8dcda84f1344dda47a97bb1927d074"
}
{
"name": "weave",
"version": "ghcr.io/weaveworks/launcher/weave-kube:2.8.1",
"id": "d797338e7beb17222e10757b71400d8471bdbd9be13b5da38ce2ebf597fb4e63"
}
{
"name": "meep-auth-svc",
"version": "meep-docker-registry:30001/meep-auth-svc:latest",
"id": "39e0d05c7c584bc95f7fa4673dabeda290e67cc82898059f52a340cba5527ddf",
"build": "sha256sum:"
}
{
"name": "meep-ingress-certs",
"version": "NA"
}
{
"name": "meep-mon-engine",
"version": "meep-docker-registry:30001/meep-mon-engine:latest",
"id": "f9b679aff6c8a08330508a06594300b7563ea57455d33f85fc58ef6d40c34144",
"build": "sha256sum:"
}
{
"name": "meep-platform-ctrl",
"version": "meep-docker-registry:30001/meep-platform-ctrl:latest",
"id": "20cb68f81bb076381af838a7aa5c44b0cf0c469a9e58ef6b0c14b0283dc8c8b6",
"build": "sha256sum:"
}
{
"name": "meep-virt-engine",
"version": "meep-docker-registry:30001/meep-virt-engine:latest",
"id": "96fa024cc0f0ec4e2ae09b25615720a8f319286df0b99991de09026c569caee3",
"build": "86ef9e9fde6a1ffda14edd65de26565af06cde1bedec06184901eab24b6f20a7"
}
{
"name": "meep-webhook",
"version": "meep-docker-registry:30001/meep-webhook:latest",
"id": "e13e9e88d88a35f0ba80b16dfcc2180e73b5740653bafdadd0af0be3bbc15acc",
"build": "085599cbaf2b71e98d36f6c4c366e6f24fb8301bd4677728885e5388702b3756"
}
Here is the output of kubectl get pods --all-namespaces | grep meep:
***@***.***:~$ kubectl get pods --all-namespaces | grep meep
default   meep-auth-svc-7967bfd885-mzrj2                      1/1   Running   0   37m
default   meep-couchdb-couchdb-0                              1/1   Running   0   39m
default   meep-docker-registry-65b77797cb-nhn7l               1/1   Running   0   39m
default   meep-grafana-5786d54797-jchs6                       1/1   Running   0   39m
default   meep-influxdb-0                                     1/1   Running   0   38m
default   meep-ingress-controller-cbx49                       1/1   Running   0   38m
default   meep-ingress-defaultbackend-5c57d5cd58-mng7l        1/1   Running   0   38m
default   meep-kube-state-metrics-868576f6d4-46fjk            1/1   Running   0   38m
default   meep-mon-engine-57d5ff7974-8jm48                    1/1   Running   0   37m
default   meep-open-map-tiles-7d99b886f-bz5jc                 1/1   Running   0   38m
default   meep-platform-ctrl-55dff5cf77-kq2g6                 1/1   Running   0   37m
default   meep-postgis-0                                      2/2   Running   0   38m
default   meep-prometheus-couchdb-exporter-795d6b6dc5-8zskn   1/1   Running   0   38m
default   meep-prometheus-node-exporter-q49bk                 1/1   Running   0   38m
default   meep-prometheus-operator-c8b8896d7-947n4            1/1   Running   0   38m
default   meep-redis-master-0                                 2/2   Running   0   37m
default   meep-redis-slave-0                                  2/2   Running   0   37m
default   meep-virt-engine-65484699c6-cmjkh                   1/1   Running   0   37m
default   meep-webhook-6865678784-s9hvm                       1/1   Running   0   37m
default   prometheus-meep-prometheus-server-0                 2/2   Running   1   36m
Regarding demo1, I was able to import and save the demo1 script and to
execute it, creating a 'demo1sandbox', and I started the iperf-proxy.
Unfortunately, here are the new issues:
***@***.***:~$ kubectl get pods --all-namespaces | grep demo1
demo1sandbox   meep-ams-846b4c868f-nvd76              1/1   Running            0   76s
demo1sandbox   meep-app-enablement-5c8f4dbbc5-dp4kb   1/1   Running            0   71s
demo1sandbox   meep-gis-engine-5974ff844c-27vhq       1/1   Running            0   85s
demo1sandbox   meep-loc-serv-7f66f6689c-x7426         1/1   Running            0   82s
demo1sandbox   meep-metrics-engine-6db7fbccd4-9t9p2   1/1   Running            0   68s
demo1sandbox   meep-mg-manager-654b8dc78f-z4xjw       1/1   Running            0   65s
demo1sandbox   meep-rnis-5f44fbf47d-xmrzs             1/1   Running            0   62s
demo1sandbox   meep-sandbox-ctrl-86cdc58ccb-828gh     1/1   Running            0   58s
demo1sandbox   meep-tc-engine-7885895f48-58lmq        0/1   ImagePullBackOff   0   54s
demo1sandbox   meep-wais-69c7c65dff-nmmk9             0/1   ErrImagePull       0   80s
And here is the kubectl describe pod output for both failed pods:
***@***.***:~$ kubectl describe pod meep-tc-engine-7885895f48-58lmq
Error from server (NotFound): pods "meep-tc-engine-7885895f48-58lmq" not
found
***@***.***:~$ kubectl describe pod meep-wais-69c7c65dff-nmmk9
Error from server (NotFound): pods "meep-wais-69c7c65dff-nmmk9" not found
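A note on the NotFound errors above: the demo pods were created in the demo1sandbox namespace, so kubectl must be told where to look. A minimal sketch, assuming the pod and namespace names from the listing above; the awk filter is illustrative and is shown running on captured sample output so it is self-contained:

```shell
# 'kubectl describe' and 'kubectl logs' default to the 'default' namespace;
# the demo pods live in 'demo1sandbox', so the commands must be
# namespace-qualified, e.g.:
#   kubectl describe pod meep-tc-engine-7885895f48-58lmq -n demo1sandbox
#   kubectl logs meep-wais-69c7c65dff-nmmk9 -n demo1sandbox
# A quick filter for non-Running pods (shown here on a captured sample):
sample='NAMESPACE NAME READY STATUS RESTARTS AGE
demo1sandbox meep-tc-engine-7885895f48-58lmq 0/1 ImagePullBackOff 0 54s
demo1sandbox meep-ams-846b4c868f-nvd76 1/1 Running 0 76s'
echo "$sample" | awk 'NR > 1 && $4 != "Running" {print $2, $4}'
# -> meep-tc-engine-7885895f48-58lmq ImagePullBackOff
```

Against the live cluster, the same filter would be `kubectl get pods --all-namespaces | awk 'NR > 1 && $4 != "Running" {print $2, $4}'`.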
On the GUI, after clicking the 'Deploy' button, nothing happened and the
'Status' LED turned red:
[image: image.png]
Thanks again for your help,
Best regards,
Yann Garcia
Senior Software Engineer
Microsoft MCAD.net Certified
**************************************
FSCOM SARL
Le Montespan B2
6, Avenue des Alpes
F-06600 Antibes, FRANCE
************************************************
Tel: +33 (0)4 92 94 49 08
Mobile: +33 (0)6 68 94 57 76
Email: ***@***.***
Skype: yann.garcia
SlackID: U0263576GG3
…On Wed, 8 Dec 2021 at 18:31, Mike Roy ***@***.***> wrote:
OK - so the initial error you reported is not blocking you, correct?
For Prometheus, we'll need to know what it is complaining about.
Could you first provide these:
- meepctl version all
- kubectl describe pod <pod-name> for the Prometheus pods
-
Hello Mike,
1) After starting my VM (4 CPUs, 11GB RAM, 64 GB HDD), I have this:
NAMESPACE     NAME                                READY   STATUS      RESTARTS   AGE
default       pvc-deleter-job-8gdsn               0/1     Completed   0          31h
kube-system   coredns-f9fd979d6-5db85             1/1     Running     11         2d
kube-system   coredns-f9fd979d6-skpv2             1/1     Running     11         2d
kube-system   etcd-fscom-mec                      1/1     Running     14         2d
kube-system   kube-apiserver-fscom-mec            1/1     Running     14         2d
kube-system   kube-controller-manager-fscom-mec   1/1     Running     15         2d
kube-system   kube-proxy-dft4b                    1/1     Running     14         2d
kube-system   kube-scheduler-fscom-mec            1/1     Running     17         2d
kube-system   weave-net-cchqt                     2/2     Running     30         2d
2) After AdvantEDGE started (meepctl deploy dep && meepctl deploy core):
default       meep-auth-svc-7967bfd885-r75j8                      1/1   Running     0    47s
default       meep-couchdb-couchdb-0                              1/1   Running     0    2m
default       meep-docker-registry-65b77797cb-psjtc               1/1   Running     0    117s
default       meep-grafana-5786d54797-qbch2                       1/1   Running     0    114s
default       meep-influxdb-0                                     1/1   Running     0    111s
default       meep-ingress-controller-frv4c                       1/1   Running     0    108s
default       meep-ingress-defaultbackend-5c57d5cd58-pq55m        1/1   Running     0    108s
default       meep-kube-state-metrics-868576f6d4-pdnb9            1/1   Running     0    101s
default       meep-mon-engine-57d5ff7974-cx2sb                    1/1   Running     0    37s
default       meep-open-map-tiles-7d99b886f-pcrck                 1/1   Running     0    96s
default       meep-platform-ctrl-55dff5cf77-vj6v2                 1/1   Running     0    32s
default       meep-postgis-0                                      2/2   Running     0    93s
default       meep-prometheus-couchdb-exporter-795d6b6dc5-nx44v   0/1   Running     0    72s
default       meep-prometheus-node-exporter-nrmjd                 1/1   Running     0    72s
default       meep-prometheus-operator-c8b8896d7-k24nb            1/1   Running     0    72s
default       meep-redis-master-0                                 2/2   Running     0    55s
default       meep-redis-slave-0                                  2/2   Running     0    55s
default       meep-virt-engine-65484699c6-tvx7z                   1/1   Running     1    27s
default       meep-webhook-6865678784-68sgs                       1/1   Running     0    20s
default       prometheus-meep-prometheus-server-0                 2/2   Running     1    64s
default       pvc-deleter-job-8gdsn                               0/1   Completed   0    31h
kube-system   coredns-f9fd979d6-5db85                             1/1   Running     11   2d
kube-system   coredns-f9fd979d6-skpv2                             1/1   Running     11   2d
kube-system   etcd-fscom-mec                                      1/1   Running     14   2d
kube-system   kube-apiserver-fscom-mec                            1/1   Running     14   2d
kube-system   kube-controller-manager-fscom-mec                   1/1   Running     15   2d
kube-system   kube-proxy-dft4b                                    1/1   Running     14   2d
kube-system   kube-scheduler-fscom-mec                            1/1   Running     17   2d
kube-system   weave-net-cchqt                                     2/2   Running     30   2d
The memory and CPU state look fine (no swapping, free memory remaining, and
the CPU is not overloaded):
***@***.***:~$ free
              total        used        free      shared  buff/cache   available
Mem:       11186656     2747644     5563304       34580     2875708     8356624
Swap:             0           0           0
***@***.***:~$ vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 2  0      0 5553396 127468 2748440    0    0  1461   376 1123 2102 11 29 59  1  0
3) After importing and saving the demo1 script, everything looks fine.
4) After creating the sandbox 'demo1sandbox':
demo1sandbox   meep-ams-846b4c868f-g8vsv              1/1   Running            0   23s
demo1sandbox   meep-app-enablement-5c8f4dbbc5-8zw77   1/1   Running            0   34s
demo1sandbox   meep-gis-engine-5974ff844c-5kjsj       1/1   Running            0   51s
demo1sandbox   meep-loc-serv-7f66f6689c-dl4vb         1/1   Running            0   49s
demo1sandbox   meep-metrics-engine-6db7fbccd4-ctftv   1/1   Running            0   46s
demo1sandbox   meep-mg-manager-654b8dc78f-9tp77       1/1   Running            0   42s
demo1sandbox   meep-rnis-5f44fbf47d-nkg8d             1/1   Running            0   31s
demo1sandbox   meep-sandbox-ctrl-86cdc58ccb-frfwm     1/1   Running            0   18s
demo1sandbox   meep-tc-engine-7885895f48-dt4vz        0/1   ImagePullBackOff   0   15s
demo1sandbox   meep-wais-69c7c65dff-5fml4             0/1   ErrImagePull       0   26s
Unfortunately, the command 'kubectl logs <pod name> -p' does not help:
***@***.***:~$ kubectl logs meep-tc-engine-7885895f48-dt4vz -p
Error from server (NotFound): pods "meep-tc-engine-7885895f48-dt4vz" not
found
***@***.***:~$ kubectl logs meep-wais-69c7c65dff-5fml4 -p
Error from server (NotFound): pods "meep-wais-69c7c65dff-5fml4" not found
5) Regarding the VM performance, it seems to be fine:
***@***.***:~$ free
              total        used        free      shared  buff/cache   available
Mem:       11186656     3328720     4768596       48568     3089340     7901824
Swap:             0           0           0
***@***.***:~$ vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 3  0      0 4766940 143432 2946004    0    0   782   303 1155 2112 10 26 63  1  0
Thanks a lot,
Best regards,
Yann Garcia
…On Thu, 9 Dec 2021 at 14:22, Kevin Di Lallo ***@***.***> wrote:
Hi @YannGarcia <https://github.com/YannGarcia>,
Thank you for the logs.
Could you please provide output from the following command: kubectl
describe node
Also, what are your system hardware specifications (cpu, ram, etc.)?
For the TC Engine & WAIS pods, did they start successfully when you
deployed the scenario and then crash or were they unable to start at all?
If they crashed, you should be able to get logs from the pods after they
crash using the command: kubectl logs <pod name> -p
Please provide the last few lines of these logs to help identify the cause
of the crash.
Thanks!
-
Hello Mike,
I just installed and deployed the latest version of AdvantEDGE (commit
f203f7a) and successfully executed the demo3 scenario.
I have just one question: it's not clear how I can interact with this demo3
scenario.
A couple of other questions:
1) How can I access the logs of (for instance) the MEC Location Service
client/server?
2) How can I find the port to access the MEC Location Service directly?
Many thanks in advance for your help,
Best regards,
Yann Garcia
-
Hello Kevin,
Everything is fine: demo1 is working, and I can see some location logs in
the monitor view.
***@***.***:~/AdvantEDGE/examples/demo1$ kubectl get pods --all-namespaces | grep meep
default   meep-auth-svc-7967bfd885-khmls                      1/1   Running   0   13m
default   meep-couchdb-couchdb-0                              1/1   Running   0   14m
default   meep-docker-registry-65b77797cb-mq58b               1/1   Running   0   14m
default   meep-grafana-5786d54797-l9qn9                       1/1   Running   0   14m
default   meep-influxdb-0                                     1/1   Running   0   14m
default   meep-ingress-controller-cwgbj                       1/1   Running   0   14m
default   meep-ingress-defaultbackend-5c57d5cd58-ptlzc        1/1   Running   0   14m
default   meep-kube-state-metrics-868576f6d4-jmlqr            1/1   Running   0   14m
default   meep-mon-engine-57d5ff7974-q8f8v                    1/1   Running   0   13m
default   meep-open-map-tiles-7d99b886f-rhvcx                 1/1   Running   0   14m
default   meep-platform-ctrl-866bfdcdf6-znm6d                 1/1   Running   0   13m
default   meep-postgis-0                                      2/2   Running   0   14m
default   meep-prometheus-couchdb-exporter-795d6b6dc5-lflgq   1/1   Running   0   13m
default   meep-prometheus-node-exporter-rwfnn                 1/1   Running   0   13m
default   meep-prometheus-operator-c8b8896d7-fq6qr            1/1   Running   0   13m
default   meep-redis-master-0                                 2/2   Running   0   13m
default   meep-redis-slave-0                                  2/2   Running   0   13m
default   meep-virt-engine-65484699c6-sqtlc                   1/1   Running   0   12m
default   meep-webhook-6865678784-qrczh                       1/1   Running   0   12m
default   prometheus-meep-prometheus-server-0                 2/2   Running   1   13m
demo1sb   meep-ams-6bd7c58677-6wwnr                           1/1   Running   0   4m6s
demo1sb   meep-app-enablement-5bb46c7c45-kxbs2                1/1   Running   0   4m19s
demo1sb   meep-gis-engine-64bfd57b7-gq4cb                     1/1   Running   0   4m35s
demo1sb   meep-loc-serv-59d6c8d9c7-b8ln7                      1/1   Running   0   4m34s
demo1sb   meep-metrics-engine-77f84d7cb6-z86fm                1/1   Running   0   4m31s
demo1sb   meep-mg-manager-6b8747848f-md6wf                    1/1   Running   0   4m15s
demo1sb   meep-rnis-5bc8f57588-bjtfr                          1/1   Running   0   4m30s
demo1sb   meep-sandbox-ctrl-856778fffd-kfp6d                  1/1   Running   0   4m2s
demo1sb   meep-tc-engine-67cbc7c57b-r82cw                     1/1   Running   0   4m28s
demo1sb   meep-wais-7457b97b55-qqqxp                          1/1   Running   0   4m11s
But it's not clear how to see the traffic between the iperf client and
server, or how I can interact with the demo. Do you have any hints for me?
Thanks a lot for your help,
Best regards,
Yann Garcia
…On Thu, 16 Dec 2021 at 18:08, Kevin Di Lallo ***@***.***> wrote:
Hi @YannGarcia <https://github.com/YannGarcia>,
Has this issue been resolved or are you still having issues with demo1?
Please provide output for kubectl describe node when you see the pod
problems.
Thanks!
-
Hello Kevin,
Many thanks for your inputs.
I'm going to play with it ;)
Merry Christmas and Happy New Year.
Best regards,
Yann Garcia
…On Thu, 16 Dec 2021 at 18:22, Kevin Di Lallo ***@***.***> wrote:
Hi @YannGarcia <https://github.com/YannGarcia>,
WRT demo3, there is some information missing in the documentation that I
will update shortly. Once deployed, you can access the Demo3 edge
application frontend as follows:
- demo3-mep1: http://<AdvantEDGE IP address>:31111
- demo3-mep2: http://<AdvantEDGE IP address>:31112
You can use the Demo3 frontend to register/deregister the demo3
applications and to track terminal devices.
WRT Location Service, you can access the swagger UI using instructions
provided here:
-
https://interdigitalinc.github.io/AdvantEDGE/docs/overview/overview-api/#viewing-api-specification
There is no port reserved for directly accessing internal MEC services; this
is done via ingress path rules. You can access the Location Service using
the following URI basepath:
- https://<AdvantEDGE IP address>/<sandbox name>/location/v2
Finally, for the Location service logs, you can look directly at the k8s
pod logs. Optionally, there is an API dashboard in the AdvantEDGE frontend
monitoring tab dashboards. You must select your
<sandbox-name>_<scenario-name> database to view the API logs. I have not
tested this recently; however, I will give it a try and see if there are
any issues that may require a dashboard update. Otherwise, the service logs
are always available directly in the pod.
Hope this helps & answers your questions!
Regards!
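As a small illustration of the basepath rule above, the service URL can be assembled in the shell (the IP address and sandbox name below are placeholders; substitute your own, and the /queries/users path is an example Location API endpoint):

```shell
# Hypothetical values -- replace with your platform IP and sandbox name
ADVANTEDGE_IP=192.0.2.10
SANDBOX=demo1sb
# URI basepath per the ingress path rules described above
LOC_BASE="https://${ADVANTEDGE_IP}/${SANDBOX}/location/v2"
echo "$LOC_BASE"
# -> https://192.0.2.10/demo1sb/location/v2
# A Location Service query through the ingress could then look like:
#   curl -k "$LOC_BASE/queries/users"
```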
-
Dear All,
I wish you the best for this new year.
I will start with another question:
1) I was able to access demo3:
[image: image.png]
2) But I failed to add an AMS device. I tried with the terminal name
(10.100.0.1) and with its MAC address.
How can I proceed?
Many thanks in advance,
Best regards,
Yann Garcia
-
Good morning Kevin,
Many thanks for your help.
With the correct ingress.host, it works fine now. I can access the HTTP REST
API log aggregation, and I get responses to requests such as
http://127.0.0.1/demo3sandbox/mec_service_mgmt/v1/services.
Now, I guess we are entering demo3 development: on requests such as
127.0.0.1/demo3sandbox/location/v2/ or 127.0.0.1/demo3sandbox/rni/v2/, I
get a "Hello World" response. How can I proceed now?
Many thanks again for your help,
Best regards,
Yann Garcia
…On Mon, 10 Jan 2022 at 17:41, Kevin Di Lallo ***@***.***> wrote:
Hi @YannGarcia <https://github.com/YannGarcia>,
Happy new year as well!
I have updated the Demo3 documentation with details about accessing the
application frontend:
https://interdigitalinc.github.io/AdvantEDGE/docs/usage/usage-demo3/#using-demo3-with-advantedge
WRT your AMS issue, I believe this may be due to your AdvantEDGE
deployment configuration. The services used in Demo3 require the platform
IP address to be configured in the deployment configuration file under
ingress.host. You must replace my-platform-fqdn with your deployment IP
address and *redeploy the core AdvantEDGE pods*. This configuration value
is used by the service instances to correctly build returned URIs, which
seems to cause problems with the Demo3 application when not configured.
When set correctly, Demo3 frontend controls should work correctly.
NOTE: You should use the UE ID (i.e. 10.100.0.1) to track it.
Please let me know if this is the issue you were observing.
Regards!
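As a sketch of the change described above (the surrounding file layout is an assumption; the point is only the ingress.host value):

```
# deployment configuration fragment (hypothetical layout)
ingress:
  # replace the placeholder FQDN with the platform IP address,
  # then redeploy the AdvantEDGE core pods
  host: 192.0.2.10    # was: my-platform-fqdn
```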
-
Hello,
While executing the following command:
meepctl -v build all
I noticed an error: opencollective not found.
Do I have to install it (npm install -g opencollective)?
Thanks a lot for your help,
Best Regards,
Yann Garcia