Knative Eventing is a system designed to address common needs of cloud-native development, providing composable primitives that enable late binding of event sources and event consumers. It has the following design goals:

  1. Services are loosely coupled during development and deployed independently on a variety of platforms (Kubernetes, VMs, SaaS, or PaaS).
  2. A producer can generate events before a consumer is listening, and a consumer can express interest in an event or class of events that is not yet being produced.
  3. Services can be connected to create new applications
  • without modifying producer or consumer, and
  • with the ability to select a specific subset of events from a particular producer.
  4. Cross-service interoperability is guaranteed. This matches the design goals of CloudEvents, a common specification for cross-service interoperability developed by the CNCF Serverless working group; the on-the-wire format is sketched below.
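For example, a CloudEvent delivered over HTTP in binary mode carries its context attributes as Ce-* headers, which is exactly the shape of the curl requests used later in this article (a sketch; the host, event type, and source here are hypothetical):

POST / HTTP/1.1
Host: broker.example.com
Ce-Id: order-123
Ce-Specversion: 0.3
Ce-Type: com.example.order.created
Ce-Source: /orders
Content-Type: application/json

{"orderId": "123"}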

Architecture

Knative Eventing currently has three main usage patterns:

  • Source to Service

Events are delivered from a source directly to a single service (an Addressable endpoint, which may be a Knative Service or a core Kubernetes Service). In this pattern, the source is responsible for retrying or queueing events if the destination service is unavailable.
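A minimal sketch of this pattern, using the PingSource that ships with Eventing to deliver an event on a cron schedule straight to a Kubernetes Service (the API version and payload field name vary between releases, and event-display is a hypothetical consumer):

apiVersion: sources.knative.dev/v1alpha2   # check kubectl api-versions for your release
kind: PingSource
metadata:
  name: ping-every-minute
spec:
  schedule: "*/1 * * * *"                  # standard cron syntax
  jsonData: '{"msg": "ping"}'              # payload; field name differs across releases
  sink:
    ref:
      apiVersion: v1
      kind: Service
      name: event-display                  # hypothetical consumer Service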

  • Channels and Subscriptions

With Channels and Subscriptions, Knative Eventing defines a Channel, which can be connected to various backends (such as in-memory, Kafka, or GCP Pub/Sub) to source events. Each Channel can have one or more subscribers in the form of sink services that receive event messages and process them as needed. Every message in a Channel is formatted as a CloudEvent and passed further down the chain to other subscribers for additional processing. The Channels-and-Subscriptions pattern has no way to filter messages.
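A minimal sketch of this pattern, pairing an in-memory Channel with a Subscription (API versions vary by release; event-display is a hypothetical consumer Service):

apiVersion: messaging.knative.dev/v1alpha1   # check kubectl api-versions for your release
kind: InMemoryChannel
metadata:
  name: demo-channel
---
apiVersion: messaging.knative.dev/v1alpha1
kind: Subscription
metadata:
  name: demo-subscription
spec:
  channel:                                   # the Channel to consume events from
    apiVersion: messaging.knative.dev/v1alpha1
    kind: InMemoryChannel
    name: demo-channel
  subscriber:                                # the sink that receives each event
    ref:
      apiVersion: v1
      kind: Service
      name: event-display                    # hypothetical consumer Service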

  • Brokers and Triggers

A Broker provides a bucket of events that can be selected by attribute. It receives events and forwards them to subscribers defined by one or more matching Triggers.

A Trigger describes a filter on event attributes; events that match are delivered to an Addressable. You can create as many Triggers as you need.
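Later in this article the default Broker is created by labeling a namespace, but a Broker can also be declared directly as a resource (a sketch, assuming the v1alpha1 API of this release; which Channel it uses then comes from config-br-defaults). Concrete Trigger examples appear in the walkthrough below.

apiVersion: eventing.knative.dev/v1alpha1   # API version varies by release
kind: Broker
metadata:
  name: default
  namespace: event-example                  # namespace created later in this guide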

Higher-level event constructs

In some cases you may want a set of functions to work together, and for those use cases Knative Eventing provides two additional resources:

  • Sequence provides a way to define an ordered list of functions (see the sketch after this list).
  • Parallel provides a way to define a list of branches for events.
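For example, a Sequence feeds each event through its steps in order (a sketch; step-one and step-two are hypothetical Knative Services, and the flows.knative.dev API version depends on your release):

apiVersion: flows.knative.dev/v1alpha1   # check your release's API version
kind: Sequence
metadata:
  name: demo-sequence
spec:
  steps:                                  # events flow through these in order
    - ref:
        apiVersion: serving.knative.dev/v1
        kind: Service
        name: step-one                    # hypothetical
    - ref:
        apiVersion: serving.knative.dev/v1
        kind: Service
        name: step-two                    # hypothetical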

Source

A Source is where events come from: it is how we define where events are generated and how they are delivered to interested parties. The Knative team has developed a number of sources that work out of the box. To name a few:

  • GCP PubSub
    Subscribes to a topic in Google Cloud Pub/Sub and listens for messages.
  • Kubernetes Event
    A feed of all the events occurring in the Kubernetes cluster.
  • GitHub
    Watches for events in a GitHub repository, such as pull requests, pushes, and the creation of releases.
  • Container Source
    If you need to create your own event source, Knative also offers an abstraction called ContainerSource that lets you easily create a custom event source packaged as a container. See the "Building a custom event source" chapter for details, and the sketch after this list.
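A ContainerSource sketch: you supply a container image that emits events, and Knative points the container at the sink (via a --sink argument or a K_SINK environment variable, depending on the release). The schema also changed across versions, with older releases taking spec.image directly instead of a pod template, so treat the following as illustrative; the image and consumer names are hypothetical:

apiVersion: sources.knative.dev/v1alpha2   # schema and version vary by release
kind: ContainerSource
metadata:
  name: my-custom-source
spec:
  template:                                # pod template for the event-producing container
    spec:
      containers:
        - name: producer
          image: example.com/my-event-producer:latest   # hypothetical image
  sink:                                    # where emitted events are delivered
    ref:
      apiVersion: v1
      kind: Service
      name: event-display                  # hypothetical consumer Service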

This list is only a fraction of all the sources available, and the full list is growing rapidly. You can find the current list of event sources in the Knative ecosystem section of the Knative Eventing documentation.

Installation

1: Install the CRDs

kubectl apply --filename https://github.com/knative/eventing/releases/download/v0.14.0/eventing-crds.yaml

You should see output similar to the following:

customresourcedefinition.apiextensions.k8s.io/apiserversources.sources.knative.dev created
customresourcedefinition.apiextensions.k8s.io/brokers.eventing.knative.dev created
customresourcedefinition.apiextensions.k8s.io/channels.messaging.knative.dev created
customresourcedefinition.apiextensions.k8s.io/containersources.sources.knative.dev created
customresourcedefinition.apiextensions.k8s.io/eventtypes.eventing.knative.dev created
customresourcedefinition.apiextensions.k8s.io/parallels.flows.knative.dev created
customresourcedefinition.apiextensions.k8s.io/pingsources.sources.knative.dev created
customresourcedefinition.apiextensions.k8s.io/sequences.flows.knative.dev created
customresourcedefinition.apiextensions.k8s.io/sinkbindings.sources.knative.dev created
customresourcedefinition.apiextensions.k8s.io/subscriptions.messaging.knative.dev created
customresourcedefinition.apiextensions.k8s.io/triggers.eventing.knative.dev created

2: Install the Eventing core components

kubectl apply --filename https://github.com/knative/eventing/releases/download/v0.14.0/eventing-core.yaml

You should see output similar to the following:

namespace/knative-eventing created
serviceaccount/eventing-controller created
clusterrolebinding.rbac.authorization.k8s.io/eventing-controller created
clusterrolebinding.rbac.authorization.k8s.io/eventing-controller-resolver created
clusterrolebinding.rbac.authorization.k8s.io/eventing-controller-source-observer created
clusterrolebinding.rbac.authorization.k8s.io/eventing-controller-sources-controller created
clusterrolebinding.rbac.authorization.k8s.io/eventing-controller-manipulator created
serviceaccount/pingsource-jobrunner created
clusterrolebinding.rbac.authorization.k8s.io/pingsource-jobrunner created
serviceaccount/eventing-webhook created
clusterrolebinding.rbac.authorization.k8s.io/eventing-webhook created
clusterrolebinding.rbac.authorization.k8s.io/eventing-webhook-resolver created
clusterrolebinding.rbac.authorization.k8s.io/eventing-webhook-podspecable-binding created
configmap/config-br-default-channel created
configmap/config-br-defaults created
configmap/default-ch-webhook created
configmap/config-leader-election created
configmap/config-logging created
configmap/config-observability created
configmap/config-tracing created
deployment.apps/eventing-controller created
deployment.apps/eventing-webhook created
service/eventing-webhook created
customresourcedefinition.apiextensions.k8s.io/apiserversources.sources.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/brokers.eventing.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/channels.messaging.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/containersources.sources.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/eventtypes.eventing.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/parallels.flows.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/pingsources.sources.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/sequences.flows.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/sinkbindings.sources.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/subscriptions.messaging.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/triggers.eventing.knative.dev unchanged
clusterrole.rbac.authorization.k8s.io/addressable-resolver created
clusterrole.rbac.authorization.k8s.io/service-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/serving-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/channel-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/broker-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/messaging-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/flows-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/eventing-broker-filter created
clusterrole.rbac.authorization.k8s.io/eventing-broker-ingress created
clusterrole.rbac.authorization.k8s.io/eventing-config-reader created
clusterrole.rbac.authorization.k8s.io/channelable-manipulator created
clusterrole.rbac.authorization.k8s.io/knative-eventing-namespaced-admin created
clusterrole.rbac.authorization.k8s.io/knative-messaging-namespaced-admin created
clusterrole.rbac.authorization.k8s.io/knative-flows-namespaced-admin created
clusterrole.rbac.authorization.k8s.io/knative-sources-namespaced-admin created
clusterrole.rbac.authorization.k8s.io/knative-eventing-namespaced-edit created
clusterrole.rbac.authorization.k8s.io/knative-eventing-namespaced-view created
clusterrole.rbac.authorization.k8s.io/knative-eventing-controller created
clusterrole.rbac.authorization.k8s.io/knative-eventing-jobrunner created
clusterrole.rbac.authorization.k8s.io/podspecable-binding created
clusterrole.rbac.authorization.k8s.io/builtin-podspecable-binding created
clusterrole.rbac.authorization.k8s.io/source-observer created
clusterrole.rbac.authorization.k8s.io/eventing-sources-source-observer created
clusterrole.rbac.authorization.k8s.io/knative-eventing-sources-controller created
clusterrole.rbac.authorization.k8s.io/knative-eventing-webhook created
validatingwebhookconfiguration.admissionregistration.k8s.io/config.webhook.eventing.knative.dev created
mutatingwebhookconfiguration.admissionregistration.k8s.io/webhook.eventing.knative.dev created
validatingwebhookconfiguration.admissionregistration.k8s.io/validation.webhook.eventing.knative.dev created
secret/eventing-webhook-certs created
mutatingwebhookconfiguration.admissionregistration.k8s.io/sinkbindings.webhook.sources.knative.dev created

3: Install a Channel (messaging) layer

Knative supports several Channel implementations here: Apache Kafka, Google Cloud Pub/Sub, NATS, and in-memory.

For this demonstration we use the in-memory implementation. It is appealing because it is simple and self-contained, but it is not suitable for production use cases; for production, use one of the other implementations.

The following command installs the in-memory Channel implementation:

kubectl apply --filename https://github.com/knative/eventing/releases/download/v0.14.0/in-memory-channel.yaml

You should see output similar to the following:

configmap/config-imc-event-dispatcher created
clusterrole.rbac.authorization.k8s.io/imc-addressable-resolver created
clusterrole.rbac.authorization.k8s.io/imc-channelable-manipulator created
clusterrole.rbac.authorization.k8s.io/imc-controller created
clusterrole.rbac.authorization.k8s.io/imc-dispatcher created
service/imc-dispatcher created
serviceaccount/imc-dispatcher created
serviceaccount/imc-controller created
clusterrolebinding.rbac.authorization.k8s.io/imc-controller created
clusterrolebinding.rbac.authorization.k8s.io/imc-dispatcher created
customresourcedefinition.apiextensions.k8s.io/inmemorychannels.messaging.knative.dev created
deployment.apps/imc-controller created
deployment.apps/imc-dispatcher created
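If you later install a different Channel implementation (Kafka, for example), you can change the cluster-wide default Channel in the default-ch-webhook ConfigMap created above. A sketch, assuming the v0.14 configuration format:

apiVersion: v1
kind: ConfigMap
metadata:
  name: default-ch-webhook
  namespace: knative-eventing
data:
  default-ch-config: |
    clusterDefault:                   # default Channel kind for the whole cluster
      apiVersion: messaging.knative.dev/v1alpha1
      kind: InMemoryChannel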

4: Install a Broker (eventing) layer

Once you have decided which Channels to use and installed them, configure the Brokers by controlling which Channel they use. The Channel can be selected as a cluster-level default, per namespace, or per individual Broker; this is configured in the config-br-defaults ConfigMap in the knative-eventing namespace.

Two implementations are supported here: the Channel-based Broker and the MT-Channel-based (multi-tenant) Broker.

The MT-Channel-based Broker is the multi-tenant Broker implementation that Knative provides for event routing.

The following command installs the Channel-based Broker implementation:

kubectl apply --filename https://github.com/knative/eventing/releases/download/v0.14.0/channel-broker.yaml


To customize which Broker Channel implementation is used, update the following ConfigMap to specify which configurations are used for which namespaces:

apiVersion: v1
data:
  default-br-config: |
    clusterDefault:
      brokerClass: ChannelBasedBroker
      apiVersion: v1
      kind: ConfigMap
      name: config-br-default-channel
      namespace: knative-eventing
kind: ConfigMap
metadata:
  annotations:
  labels:
    eventing.knative.dev/release: v0.14.0
  name: config-br-defaults
  namespace: knative-eventing
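To give individual namespaces a different Broker class or Channel, the same ConfigMap also takes a namespaceDefaults section (a sketch, assuming the v0.14 format; my-namespace is a hypothetical namespace):

apiVersion: v1
kind: ConfigMap
metadata:
  name: config-br-defaults
  namespace: knative-eventing
data:
  default-br-config: |
    clusterDefault:
      brokerClass: ChannelBasedBroker
      apiVersion: v1
      kind: ConfigMap
      name: config-br-default-channel
      namespace: knative-eventing
    namespaceDefaults:                # per-namespace overrides
      my-namespace:                   # hypothetical namespace
        brokerClass: MTChannelBasedBroker
        apiVersion: v1
        kind: ConfigMap
        name: config-br-default-channel
        namespace: knative-eventing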

5: Watch the Knative components until all of them show a "Running" status:

kubectl get pods --namespace knative-eventing
NAME                                   READY   STATUS    RESTARTS   AGE
eventing-controller-5d866849fd-xm4lz   1/1     Running   0          157m
eventing-webhook-59489cddcf-ncvr4      1/1     Running   0          157m
imc-controller-76d5bfd958-857x6        1/1     Running   0          47m
imc-dispatcher-6bd7c74d7d-pvh8b        1/1     Running   0          47m
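Instead of polling by hand, you can have kubectl block until every pod in the namespace reports Ready (plain kubectl, nothing Knative-specific):

kubectl wait pod --all --for=condition=Ready --namespace knative-eventing --timeout=300s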

6: Optional extensions

Several extensions are supported at this point; install whichever ones you need.

Knative Hello World

The steps above give you a basic installation. The following demo needs more resources, so for it to run properly, execute the command below:

kubectl apply --filename https://github.com/knative/eventing/releases/download/v0.14.0/eventing.yaml

namespace/knative-eventing unchanged
serviceaccount/eventing-controller unchanged
clusterrolebinding.rbac.authorization.k8s.io/eventing-controller unchanged
clusterrolebinding.rbac.authorization.k8s.io/eventing-controller-resolver unchanged
clusterrolebinding.rbac.authorization.k8s.io/eventing-controller-source-observer unchanged
clusterrolebinding.rbac.authorization.k8s.io/eventing-controller-sources-controller unchanged
clusterrolebinding.rbac.authorization.k8s.io/eventing-controller-manipulator unchanged
serviceaccount/pingsource-jobrunner unchanged
clusterrolebinding.rbac.authorization.k8s.io/pingsource-jobrunner unchanged
serviceaccount/eventing-webhook unchanged
clusterrolebinding.rbac.authorization.k8s.io/eventing-webhook unchanged
clusterrolebinding.rbac.authorization.k8s.io/eventing-webhook-resolver unchanged
clusterrolebinding.rbac.authorization.k8s.io/eventing-webhook-podspecable-binding unchanged
configmap/config-br-default-channel unchanged
configmap/config-br-defaults unchanged
configmap/default-ch-webhook unchanged
configmap/config-leader-election unchanged
configmap/config-logging unchanged
configmap/config-observability unchanged
configmap/config-tracing unchanged
deployment.apps/eventing-controller unchanged
deployment.apps/eventing-webhook unchanged
service/eventing-webhook unchanged
customresourcedefinition.apiextensions.k8s.io/apiserversources.sources.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/brokers.eventing.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/channels.messaging.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/containersources.sources.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/eventtypes.eventing.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/parallels.flows.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/pingsources.sources.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/sequences.flows.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/sinkbindings.sources.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/subscriptions.messaging.knative.dev unchanged
customresourcedefinition.apiextensions.k8s.io/triggers.eventing.knative.dev unchanged
clusterrole.rbac.authorization.k8s.io/addressable-resolver configured
clusterrole.rbac.authorization.k8s.io/service-addressable-resolver unchanged
clusterrole.rbac.authorization.k8s.io/serving-addressable-resolver unchanged
clusterrole.rbac.authorization.k8s.io/channel-addressable-resolver unchanged
clusterrole.rbac.authorization.k8s.io/broker-addressable-resolver unchanged
clusterrole.rbac.authorization.k8s.io/messaging-addressable-resolver unchanged
clusterrole.rbac.authorization.k8s.io/flows-addressable-resolver unchanged
clusterrole.rbac.authorization.k8s.io/eventing-broker-filter unchanged
clusterrole.rbac.authorization.k8s.io/eventing-broker-ingress unchanged
clusterrole.rbac.authorization.k8s.io/eventing-config-reader unchanged
clusterrole.rbac.authorization.k8s.io/channelable-manipulator configured
clusterrole.rbac.authorization.k8s.io/knative-eventing-namespaced-admin unchanged
clusterrole.rbac.authorization.k8s.io/knative-messaging-namespaced-admin unchanged
clusterrole.rbac.authorization.k8s.io/knative-flows-namespaced-admin unchanged
clusterrole.rbac.authorization.k8s.io/knative-sources-namespaced-admin unchanged
clusterrole.rbac.authorization.k8s.io/knative-eventing-namespaced-edit unchanged
clusterrole.rbac.authorization.k8s.io/knative-eventing-namespaced-view unchanged
clusterrole.rbac.authorization.k8s.io/knative-eventing-controller unchanged
clusterrole.rbac.authorization.k8s.io/knative-eventing-jobrunner unchanged
clusterrole.rbac.authorization.k8s.io/podspecable-binding configured
clusterrole.rbac.authorization.k8s.io/builtin-podspecable-binding unchanged
clusterrole.rbac.authorization.k8s.io/source-observer configured
clusterrole.rbac.authorization.k8s.io/eventing-sources-source-observer unchanged
clusterrole.rbac.authorization.k8s.io/knative-eventing-sources-controller unchanged
clusterrole.rbac.authorization.k8s.io/knative-eventing-webhook unchanged
validatingwebhookconfiguration.admissionregistration.k8s.io/config.webhook.eventing.knative.dev unchanged
mutatingwebhookconfiguration.admissionregistration.k8s.io/webhook.eventing.knative.dev unchanged
validatingwebhookconfiguration.admissionregistration.k8s.io/validation.webhook.eventing.knative.dev unchanged
secret/eventing-webhook-certs unchanged
mutatingwebhookconfiguration.admissionregistration.k8s.io/sinkbindings.webhook.sources.knative.dev unchanged
clusterrolebinding.rbac.authorization.k8s.io/eventing-channel-broker-controller created
clusterrole.rbac.authorization.k8s.io/knative-eventing-channel-broker-controller created
customresourcedefinition.apiextensions.k8s.io/configmappropagations.configs.internal.knative.dev created
deployment.apps/broker-controller created
deployment.apps/broker-controller configured
customresourcedefinition.apiextensions.k8s.io/configmappropagations.configs.internal.knative.dev unchanged
clusterrole.rbac.authorization.k8s.io/knative-eventing-channel-broker-controller unchanged
clusterrole.rbac.authorization.k8s.io/knative-eventing-mt-channel-broker-controller created
clusterrole.rbac.authorization.k8s.io/knative-eventing-mt-broker-filter created
serviceaccount/mt-broker-filter created
clusterrole.rbac.authorization.k8s.io/knative-eventing-mt-broker-ingress created
serviceaccount/mt-broker-ingress created
clusterrolebinding.rbac.authorization.k8s.io/eventing-mt-channel-broker-controller created
clusterrolebinding.rbac.authorization.k8s.io/knative-eventing-mt-broker-filter created
clusterrolebinding.rbac.authorization.k8s.io/knative-eventing-mt-broker-ingress created
deployment.apps/broker-filter created
service/broker-filter created
deployment.apps/broker-ingress created
service/broker-ingress created
deployment.apps/mt-broker-controller created
deployment.apps/broker-filter unchanged
service/broker-filter unchanged
deployment.apps/broker-ingress unchanged
service/broker-ingress unchanged
deployment.apps/mt-broker-controller unchanged
horizontalpodautoscaler.autoscaling/broker-ingress-hpa created
horizontalpodautoscaler.autoscaling/broker-filter-hpa created
horizontalpodautoscaler.autoscaling/broker-ingress-hpa unchanged
horizontalpodautoscaler.autoscaling/broker-filter-hpa unchanged
clusterrole.rbac.authorization.k8s.io/knative-eventing-mt-channel-broker-controller unchanged
clusterrolebinding.rbac.authorization.k8s.io/eventing-mt-channel-broker-controller unchanged
clusterrole.rbac.authorization.k8s.io/knative-eventing-mt-broker-filter unchanged
clusterrolebinding.rbac.authorization.k8s.io/knative-eventing-mt-broker-filter unchanged
serviceaccount/mt-broker-filter unchanged
clusterrole.rbac.authorization.k8s.io/knative-eventing-mt-broker-ingress unchanged
clusterrolebinding.rbac.authorization.k8s.io/knative-eventing-mt-broker-ingress unchanged
serviceaccount/mt-broker-ingress unchanged
configmap/config-imc-event-dispatcher unchanged
clusterrole.rbac.authorization.k8s.io/imc-addressable-resolver unchanged
clusterrole.rbac.authorization.k8s.io/imc-channelable-manipulator unchanged
clusterrole.rbac.authorization.k8s.io/imc-controller unchanged
clusterrole.rbac.authorization.k8s.io/imc-dispatcher unchanged
service/imc-dispatcher unchanged
serviceaccount/imc-dispatcher unchanged
serviceaccount/imc-controller unchanged
clusterrolebinding.rbac.authorization.k8s.io/imc-controller unchanged
clusterrolebinding.rbac.authorization.k8s.io/imc-dispatcher unchanged
customresourcedefinition.apiextensions.k8s.io/inmemorychannels.messaging.knative.dev unchanged
deployment.apps/imc-controller unchanged
deployment.apps/imc-dispatcher configured

Before you can start managing events, you need to create the objects that transport them.

Install Knative event sources

Create and configure an Eventing namespace

In this section you will create the event-example namespace and then add the knative-eventing-injection label to it. Namespaces let you group and organize your Knative resources, including the Eventing subcomponents.

1: Run the following command to create a namespace called event-example:

kubectl create namespace event-example

2: Add the label to your namespace with the following command:

kubectl label namespace event-example knative-eventing-injection=enabled

namespace/event-example labeled

This gives the event-example namespace the knative-eventing-injection label, which adds the resources you will use to manage your events.
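You can confirm that the label landed on the namespace:

kubectl get namespace event-example --show-labels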

In the next section you will verify that the resources added here are running correctly. After that, you can create the remaining Eventing resources needed to manage events.

Check the Broker's health

The Broker ensures that every event sent by an event producer arrives at the correct event consumers. The Broker was created when you labeled your namespace as event-ready, but it is important to verify that it is operating correctly. This guide uses the default Broker.

1: Run the following command to verify that the Broker is in a healthy state:

kubectl --namespace event-example get Broker default

This shows the Broker you created:

NAME      READY   REASON   URL                                                     AGE
default   True             http://default-broker.event-example.svc.cluster.local   64s

When the Broker is in the READY=True state, it can begin managing any events it receives.

2: If READY=False, wait two minutes and rerun the command. If you still get READY=False, see the Debugging Guide for help troubleshooting the issue.
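A good first debugging step is to inspect the Broker's conditions and events:

kubectl --namespace event-example describe broker default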

Now that your Broker is ready to manage events, you can create and configure the event producers and consumers.

Create event consumers

Your event consumers receive the events sent by event producers. In this step you will create two event consumers, hello-display and goodbye-display, to demonstrate how to configure event producers to selectively target a specific consumer.

1: To deploy the hello-display consumer to your cluster, run the following command:

kubectl --namespace event-example apply --filename - << END
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-display
spec:
  replicas: 1
  selector:
    matchLabels: &labels
      app: hello-display
  template:
    metadata:
      labels: *labels
    spec:
      containers:
        - name: event-display
          # Source code: https://github.com/knative/eventing-contrib/tree/master/cmd/event_display
          image: gcr.io/knative-releases/knative.dev/eventing-contrib/cmd/event_display

---

# Service pointing at the previous Deployment. This will be the target for event
# consumption.
kind: Service
apiVersion: v1
metadata:
  name: hello-display
spec:
  selector:
    app: hello-display
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
END

2: To deploy the goodbye-display consumer to your cluster, run the following command:

kubectl --namespace event-example apply --filename - << END
apiVersion: apps/v1
kind: Deployment
metadata:
  name: goodbye-display
spec:
  replicas: 1
  selector:
    matchLabels: &labels
      app: goodbye-display
  template:
    metadata:
      labels: *labels
    spec:
      containers:
        - name: event-display
          # Source code: https://github.com/knative/eventing-contrib/tree/master/cmd/event_display
          image: gcr.io/knative-releases/knative.dev/eventing-contrib/cmd/event_display

---

# Service pointing at the previous Deployment. This will be the target for event
# consumption.
kind: Service
apiVersion: v1
metadata:
  name: goodbye-display
spec:
  selector:
    app: goodbye-display
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
END

3: Just as you did with the Broker, verify that your event consumers are working by running the following command:

kubectl --namespace event-example get deployments hello-display goodbye-display

You should see output similar to the following:

NAME              READY   UP-TO-DATE   AVAILABLE   AGE
hello-display     1/1     1            1           88s
goodbye-display   1/1     1            1           36s

The number of replicas in the READY column should match the number in the AVAILABLE column; this can take a few minutes. If the numbers still do not match after two minutes, see the Debugging Guide for help troubleshooting the issue.

Create Triggers

A Trigger defines the events that you want each event consumer to receive. Your Broker uses Triggers to forward events to the right consumers. Each Trigger can specify a filter that selects relevant events based on CloudEvent context attributes.

1: To create the first Trigger, run the following command:

kubectl --namespace event-example apply --filename - << END
apiVersion: eventing.knative.dev/v1alpha1
kind: Trigger
metadata:
  name: hello-display
spec:
  filter:
    attributes:
      type: greeting
  subscriber:
    ref:
     apiVersion: v1
     kind: Service
     name: hello-display
END

This command creates a Trigger that sends all events of type greeting to the event consumer named hello-display.

2: To create the second Trigger, run the following command:

kubectl --namespace event-example apply --filename - << END
apiVersion: eventing.knative.dev/v1alpha1
kind: Trigger
metadata:
  name: goodbye-display
spec:
  filter:
    attributes:
      source: sendoff
  subscriber:
    ref:
     apiVersion: v1
     kind: Service
     name: goodbye-display
END

This command creates a Trigger that sends all events with source sendoff to the event consumer named goodbye-display.

3: Verify that the Triggers are working correctly by running the following command:

kubectl --namespace event-example get triggers

This returns the hello-display and goodbye-display Triggers that you created:

NAME              READY   REASON   BROKER    SUBSCRIBER_URI                                            AGE
goodbye-display   True             default   http://goodbye-display.event-example.svc.cluster.local/   109s
hello-display     True             default   http://hello-display.event-example.svc.cluster.local/     3m

If the Triggers are configured correctly, they will be ready and pointed at the correct Broker (the default Broker) and SUBSCRIBER_URI (triggerName.namespaceName.svc.cluster.local). If that is not the case, see the Debugging Guide for help troubleshooting the issue.

You have now created all the resources needed to receive and manage events: a Broker that, through Triggers, manages the events sent to your event consumers. In the next section you will create the event producer used to create events.

Create an event producer

In this section you will create an event producer that you can use to interact with the Knative Eventing subcomponents you created earlier. Most events are created programmatically, but this guide uses curl to send individual events manually and demonstrates how the right event consumers receive them. Because the Broker can only be reached from within the Eventing cluster, you must create a Pod inside that cluster to act as the event producer.

In the next step you will create a Pod that runs curl commands to send events to the Broker in the Eventing cluster.

Run the following command to create the Pod:

kubectl --namespace event-example apply --filename - << END
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: curl
  name: curl
spec:
  containers:
    # This could be any image that we can SSH into and has curl.
  - image: radial/busyboxplus:curl
    imagePullPolicy: IfNotPresent
    name: curl
    resources: {}
    stdin: true
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    tty: true
END

Now that your Eventing cluster is set up to send and consume events, in the next section you will use HTTP requests to send individual events manually and show how each event targets an individual consumer.

Send Events to the Broker

Now that the Pod has been created, you can create events by sending HTTP requests to the Broker. SSH into the Pod by running the following command:

kubectl --namespace event-example attach curl -it

You are now inside the Pod and can make HTTP requests. A prompt similar to the following will appear:

    Defaulting container name to curl.
    Use 'kubectl describe pod/ -n event-example' to see all of the containers in this pod.
    If you don't see a command prompt, try pressing enter.
    [ root@curl:/ ]$
 

To demonstrate the variety of events you can send, you will make three requests:

1: To make the first request, which creates an event of type greeting, run the following command in the SSH terminal:

curl -v "http://default-broker.event-example.svc.cluster.local" \
  -X POST \
  -H "Ce-Id: say-hello" \
  -H "Ce-Specversion: 0.3" \
  -H "Ce-Type: greeting" \
  -H "Ce-Source: not-sendoff" \
  -H "Content-Type: application/json" \
  -d '{"msg":"Hello Knative!"}'

When the Broker receives your event, the hello-display Trigger activates and sends the event to the consumer of the same name.

If the event was received, you will get a 202 Accepted response similar to the following:

< HTTP/1.1 202 Accepted
< Content-Length: 0
< Date: Thu, 14 May 2020 10:16:43 GMT

2: To make the second request, which creates an event with source sendoff, run the following command in the SSH terminal:

curl -v "http://default-broker.event-example.svc.cluster.local" \
  -X POST \
  -H "Ce-Id: say-goodbye" \
  -H "Ce-Specversion: 0.3" \
  -H "Ce-Type: not-greeting" \
  -H "Ce-Source: sendoff" \
  -H "Content-Type: application/json" \
  -d '{"msg":"Goodbye Knative!"}'

When the Broker receives your event, the goodbye-display Trigger activates and sends the event to the consumer of the same name.

If the event was received, you will get a 202 Accepted response similar to the following:

< HTTP/1.1 202 Accepted
< Content-Length: 0
< Date: Thu, 14 May 2020 10:18:04 GMT

3: To make the third request, which creates an event of type greeting and source sendoff, run the following command in the SSH terminal:

curl -v "http://default-broker.event-example.svc.cluster.local" \
  -X POST \
  -H "Ce-Id: say-hello-goodbye" \
  -H "Ce-Specversion: 0.3" \
  -H "Ce-Type: greeting" \
  -H "Ce-Source: sendoff" \
  -H "Content-Type: application/json" \
  -d '{"msg":"Hello Knative! Goodbye Knative!"}'

When the Broker receives your event, both the hello-display and goodbye-display Triggers activate and send the event to the consumers of the same names.

If the event was received, you will get a 202 Accepted response similar to the following:

< HTTP/1.1 202 Accepted
< Content-Length: 0
< Date: Thu, 14 May 2020 10:19:13 GMT

4: Exit SSH by typing exit at the command prompt.

You have sent two events to the hello-display event consumer and two events to the goodbye-display event consumer (note that say-hello-goodbye activates the Triggers of both hello-display and goodbye-display). You will verify that these events were received correctly in the next section.

Check the received events

After sending the events, verify that the appropriate subscribers received them.

1: View the logs of the hello-display event consumer by running the following command:

kubectl --namespace event-example logs -l app=hello-display --tail=100

This returns the attributes and data of the events sent to hello-display:

☁️  cloudevents.Event
Validation: valid
Context Attributes,
  specversion: 0.3
  type: greeting
  source: not-sendoff
  id: say-hello
  time: 2020-05-14T10:16:43.273679173Z
  datacontenttype: application/json
Extensions,
  knativearrivaltime: 2020-05-14T10:16:43.273622736Z
  knativehistory: default-kne-trigger-kn-channel.event-example.svc.cluster.local
  traceparent: 00-9290fc892758739bcaddf3b18863c5ec-bff2f00567d9e675-00
Data,
  {
    "msg": "Hello Knative!"
  }
☁️  cloudevents.Event
Validation: valid
Context Attributes,
  specversion: 0.3
  type: greeting
  source: sendoff
  id: say-hello-goodbye
  time: 2020-05-14T10:19:13.68289106Z
  datacontenttype: application/json
Extensions,
  knativearrivaltime: 2020-05-14T10:19:13.682844758Z
  knativehistory: default-kne-trigger-kn-channel.event-example.svc.cluster.local
  traceparent: 00-ec47a6944893a7aeea50f449a48ecc47-7908f8cb59970c45-00
Data,
  {
    "msg": "Hello Knative! Goodbye Knative!"
  }

2: View the logs of the goodbye-display event consumer by running the following command:

kubectl --namespace event-example logs -l app=goodbye-display --tail=100

This returns the attributes and data of the events sent to goodbye-display:

☁️  cloudevents.Event
Validation: valid
Context Attributes,
  specversion: 0.3
  type: not-greeting
  source: sendoff
  id: say-goodbye
  time: 2020-05-14T10:18:04.583499078Z
  datacontenttype: application/json
Extensions,
  knativearrivaltime: 2020-05-14T10:18:04.583452664Z
  knativehistory: default-kne-trigger-kn-channel.event-example.svc.cluster.local
  traceparent: 00-fa9715c616db95172fe0bbcacb5cf3b7-7ee4dd9a3256ec64-00
Data,
  {
    "msg": "Goodbye Knative!"
  }
☁️  cloudevents.Event
Validation: valid
Context Attributes,
  specversion: 0.3
  type: greeting
  source: sendoff
  id: say-hello-goodbye
  time: 2020-05-14T10:19:13.68289106Z
  datacontenttype: application/json
Extensions,
  knativearrivaltime: 2020-05-14T10:19:13.682844758Z
  knativehistory: default-kne-trigger-kn-channel.event-example.svc.cluster.local
  traceparent: 00-ec47a6944893a7aeea50f449a48ecc47-b53a2568b55b2736-00
Data,
  {
    "msg": "Hello Knative! Goodbye Knative!"
  }
