Abstract
This article examines the challenges and opportunities that the evolution of asynchronous runtimes on edge computing nodes brings to service observability. As asynchronous programming models (e.g. asyncio, tokio) become widespread, traditional synchronous, blocking-style monitoring techniques risk breaking down when tracing request chains, diagnosing resource contention, and profiling task scheduling. Through a simulated edge asynchronous service project, we dissect the core techniques for implementing distributed tracing and exposing runtime metrics in asynchronous contexts, and build a runnable demo system that shows how to combine a modern observability framework such as OpenTelemetry with a custom resource monitor to maintain high service observability in resource-constrained edge environments.
1 Project Overview and Design
In edge computing scenarios, nodes are typically resource-constrained (CPU, memory, network) yet must handle high-concurrency, low-latency requests. Adopting an asynchronous runtime (such as Python's asyncio or Rust's tokio) is a key lever for improving resource utilization and throughput. However, the non-blocking, cooperative multitasking nature of asynchronous programming poses new challenges to the traditional observability pillars of logs, metrics, and traces:
- Broken trace chains: asynchronous tasks are interleaved on the event loop, so trace-ID propagation mechanisms based on thread-local storage (TLS) fail, and request chains can no longer be correlated correctly.
- Invisible resource contention: concurrently executing asynchronous tasks may implicitly compete for CPU, memory, or I/O. In a synchronous blocking model this contention shows up as threads waiting; in an asynchronous model it can directly block the event loop (event loop blocking), which is much harder to detect with conventional metrics.
- Opaque task scheduling: developers have reduced visibility into when a task is suspended (yields) and when it is rescheduled, which makes debugging complex asynchronous interactions harder.
Project goals: build a simulated edge-computing node service that uses asyncio to handle many concurrent tasks. We integrate OpenTelemetry to achieve distributed tracing across asynchronous boundaries, and implement a custom resource-contention monitor that exposes metrics about potential CPU and memory contention during asynchronous task execution. The runnable project demonstrates, end to end, how to strengthen the observability of an asynchronous edge service.
Core design components:
- Asynchronous service simulation: simulates edge workloads such as data processing and device control.
- OpenTelemetry integration: uses opentelemetry-api, opentelemetry-sdk, and opentelemetry-instrumentation-asyncio to ensure trace context propagates correctly across asynchronous tasks.
- Custom resource monitor: an AsyncioResourceMonitor that periodically samples event-loop latency and task states and exposes them as Prometheus-format metrics.
- Visualization and export: traces are exported to the console (or Jaeger); metrics are exposed on an HTTP endpoint for Prometheus to scrape.
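The broken-trace-chain problem can be reproduced and solved in miniature before touching any framework. The sketch below (plain Python 3.8+, no OpenTelemetry required; the `trace_id` variable and request IDs are purely illustrative) shows why `contextvars`, the mechanism OpenTelemetry's Python context API builds on, survives `asyncio.create_task` boundaries where thread-local storage cannot: each task snapshots the current context at creation time.

```python
import asyncio
import contextvars

# A context variable standing in for a per-request trace ID
trace_id: contextvars.ContextVar[str] = contextvars.ContextVar("trace_id", default="none")

async def child_task() -> str:
    # The child task received a copy of its parent's context at creation,
    # so it sees the trace_id the parent set.
    await asyncio.sleep(0)
    return trace_id.get()

async def handle_request(req_id: str) -> str:
    trace_id.set(req_id)  # analogous to injecting a trace ID at the request entry point
    # create_task snapshots the current Context; the child inherits it automatically
    return await asyncio.create_task(child_task())

async def main() -> list:
    # Two concurrent "requests" do not leak context into each other
    return await asyncio.gather(handle_request("trace-A"), handle_request("trace-B"))

if __name__ == "__main__":
    print(asyncio.run(main()))  # ['trace-A', 'trace-B']
```

A `threading.local` in place of the `ContextVar` would give both children whatever the last-scheduled coroutine wrote, since all tasks share one thread; that is exactly the failure mode described above.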
2 Project Structure
edge-async-observability-demo/
├── config.yaml              # application configuration
├── requirements.txt         # Python dependencies
├── run.py                   # application entry point
├── src/                     # source code
│   ├── __init__.py
│   ├── app.py               # main application logic and routes
│   ├── core/                # core business logic
│   │   ├── __init__.py
│   │   ├── async_processor.py   # asynchronous task processor
│   │   └── device_simulator.py  # simulated device controller
│   ├── observability/       # observability modules
│   │   ├── __init__.py
│   │   ├── tracing_setup.py     # OpenTelemetry tracing setup
│   │   └── resource_monitor.py  # custom resource-contention monitor
│   └── utils/
│       ├── __init__.py
│       └── helpers.py       # shared helper functions
└── tests/                   # tests (not covered in this article)
    └── __init__.py
3 Core Code Implementation
File: config.yaml
app:
  name: "edge-async-demo"
  host: "0.0.0.0"
  port: 8080
  worker_count: 4  # number of simulated async worker coroutines
observability:
  tracing:
    enabled: true
    exporter: "console"  # options: console, jaeger
    jaeger_endpoint: "http://localhost:14268/api/traces"  # used when exporter is jaeger
    # Plain YAML has no variable interpolation, so the service name is
    # repeated literally here instead of referencing app.name.
    service_name: "edge-async-demo"
  metrics:
    enabled: true
    port: 9464  # port for exposing Prometheus metrics
    sample_interval_sec: 5.0  # resource-monitor sampling interval (seconds)
processing:
  default_timeout_sec: 30
  mock_processing_delay_range: [0.05, 0.2]  # simulated processing delay range (seconds)
File: requirements.txt
fastapi==0.104.1
uvicorn[standard]==0.24.0
opentelemetry-api==1.21.0
opentelemetry-sdk==1.21.0
opentelemetry-instrumentation-asyncio==0.42b0
opentelemetry-instrumentation-fastapi==0.42b0
opentelemetry-exporter-jaeger==1.21.0
prometheus-client==0.19.0
psutil==5.9.6
pyyaml==6.0.1
Note: the console span exporter ships with opentelemetry-sdk, so no separate console-exporter package is needed; psutil is required by the resource monitor below.
File: run.py
#!/usr/bin/env python3
"""
Main entry point for the edge async observability demo.
"""
import signal
import sys
from contextlib import asynccontextmanager
import uvicorn
from fastapi import FastAPI
from opentelemetry.instrumentation.fastapi import FastAPIInstrumentor
from src.app import router as api_router
from src.observability.tracing_setup import setup_tracing, shutdown_tracing
from src.observability.resource_monitor import start_resource_monitor, stop_resource_monitor
from src.utils.helpers import load_config

config = load_config()

@asynccontextmanager
async def lifespan(app: FastAPI):
    """Manage the application lifecycle: start and stop observability components."""
    # Startup
    print("Starting observability components...")
    setup_tracing(config)
    # start_resource_monitor creates its own background sampling task
    await start_resource_monitor(config)
    yield
    # Shutdown
    print("Shutting down observability components...")
    await stop_resource_monitor()
    await shutdown_tracing()

def create_app() -> FastAPI:
    """Create and configure the FastAPI application instance."""
    app = FastAPI(
        title=config['app']['name'],
        lifespan=lifespan,
        docs_url="/docs",
        redoc_url=None,
    )
    app.include_router(api_router)
    # Instrument the app for tracing. This needs the app instance,
    # which is why it happens here rather than in tracing_setup.py.
    FastAPIInstrumentor.instrument_app(app)
    return app

def sigterm_handler(signum, frame):
    """Graceful-shutdown handler."""
    print(f"Received signal {signum}, shutting down...")
    sys.exit(0)

if __name__ == "__main__":
    # Register signal handlers
    signal.signal(signal.SIGTERM, sigterm_handler)
    signal.signal(signal.SIGINT, sigterm_handler)
    app_instance = create_app()
    uvicorn.run(
        app_instance,
        host=config['app']['host'],
        port=config['app']['port'],
        log_config=None,  # keep uvicorn's logging config from interfering with OpenTelemetry
    )
File: src/utils/helpers.py
"""
Shared helper functions.
"""
import yaml
import random
import asyncio
from typing import Any, Dict, List

def load_config(config_path: str = "config.yaml") -> Dict[str, Any]:
    """Load the YAML configuration file."""
    with open(config_path, 'r') as f:
        return yaml.safe_load(f)

async def random_sleep(delay_range: List[float]) -> None:
    """Sleep for a random duration in the given range, simulating I/O or processing delay."""
    delay = random.uniform(*delay_range)
    await asyncio.sleep(delay)

def simulate_cpu_work(iterations: int = 1000) -> float:
    """
    Simulate a CPU-bound computation.
    Returns a dummy computed result.
    """
    result = 0.0
    for i in range(iterations):
        result += (i * 0.001) ** 2
    return result
File: src/observability/tracing_setup.py
"""
Configure and initialize OpenTelemetry tracing.
"""
import logging
from typing import Optional, Dict, Any
from opentelemetry import trace
from opentelemetry.exporter.jaeger.thrift import JaegerExporter
from opentelemetry.sdk.resources import Resource, SERVICE_NAME
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import (
    BatchSpanProcessor,
    ConsoleSpanExporter,  # the console exporter ships with the SDK
    SimpleSpanProcessor,
)
# Crucial: the asyncio instrumentation must be installed and enabled
# so that context propagates across asynchronous tasks.
from opentelemetry.instrumentation.asyncio import AsyncioInstrumentor

_tracer_provider: Optional[TracerProvider] = None

def setup_tracing(config: Dict[str, Any]) -> None:
    """Set up the global TracerProvider from configuration."""
    global _tracer_provider
    tracing_config = config['observability']['tracing']
    if not tracing_config['enabled']:
        return
    resource = Resource.create({
        SERVICE_NAME: tracing_config['service_name'],
    })
    # Create the TracerProvider
    _tracer_provider = TracerProvider(resource=resource)
    trace.set_tracer_provider(_tracer_provider)
    # Pick an exporter according to configuration
    exporter_type = tracing_config['exporter']
    if exporter_type == "console":
        # SimpleSpanProcessor exports each span immediately, which suits console output
        span_processor = SimpleSpanProcessor(ConsoleSpanExporter())
    elif exporter_type == "jaeger":
        span_exporter = JaegerExporter(
            collector_endpoint=tracing_config['jaeger_endpoint'],
        )
        # BatchSpanProcessor is recommended for production
        span_processor = BatchSpanProcessor(span_exporter)
    else:
        logging.warning(f"Unsupported exporter: {exporter_type}. Tracing disabled.")
        return
    _tracer_provider.add_span_processor(span_processor)
    # **Key step**: enable the asyncio instrumentation, which patches the event
    # loop so that context is carried across task switches.
    AsyncioInstrumentor().instrument()
    logging.info("OpenTelemetry tracing (with asyncio support) initialized.")
    # Note: FastAPIInstrumentor is invoked in run.py because it needs the app instance.

async def shutdown_tracing() -> None:
    """Shut down the tracer provider, flushing any unexported spans."""
    global _tracer_provider
    if _tracer_provider:
        # TracerProvider.shutdown() is synchronous; it flushes and stops the processors
        _tracer_provider.shutdown()
        _tracer_provider = None
        logging.info("OpenTelemetry tracing shutdown.")
File: src/observability/resource_monitor.py
"""
Custom resource-contention monitor exposing asyncio event-loop and task metrics.
"""
import asyncio
import time
from typing import Dict, Any, Optional
import psutil  # listed in requirements.txt
from prometheus_client import Gauge, Counter, start_http_server, REGISTRY

# Prometheus metric definitions
EVENT_LOOP_LATENCY = Gauge(
    'asyncio_event_loop_latency_seconds',
    'Estimated event loop scheduling latency',
    ['service']
)
ASYNC_TASKS_COUNT = Gauge(
    'asyncio_tasks_count',
    'Current number of outstanding (not yet finished) asyncio tasks',
    ['service', 'state']  # state: pending
)
CPU_USAGE = Gauge(
    'process_cpu_percent',
    'Process CPU usage percentage',
    ['service']
)
MEMORY_USAGE = Gauge(
    'process_memory_bytes',
    'Process memory usage in bytes',
    ['service']
)
COMPETITION_EVENTS = Counter(
    'resource_competition_events_total',
    'Total number of detected potential resource competition events',
    ['service', 'resource_type']  # resource_type: cpu, memory, loop_block
)

_process = psutil.Process()
_monitor_task: Optional[asyncio.Task] = None

async def _measure_loop_latency() -> float:
    """
    Estimate the event loop's scheduling latency with a zero-length sleep:
    the time between yielding control and being rescheduled.
    Returns the latency in seconds.
    """
    start = time.monotonic()
    await asyncio.sleep(0)  # yield and ask to be rescheduled immediately
    return time.monotonic() - start

async def _sample_metrics(service_name: str) -> None:
    """Collect one round of samples and update the metric values."""
    try:
        # 1. Event-loop latency
        latency = await _measure_loop_latency()
        EVENT_LOOP_LATENCY.labels(service=service_name).set(latency)
        # Record a contention event when the latency is unusually high
        if latency > 0.01:  # 10 ms threshold; tune for your workload
            COMPETITION_EVENTS.labels(service=service_name, resource_type='loop_block').inc()
        # 2. Task statistics. asyncio.all_tasks() only returns tasks that have
        #    not finished yet, so every task counted here is still outstanding.
        #    (There is no public "running" state: at most one coroutine runs at
        #    a time, and here that is the sampler itself, which we exclude.)
        outstanding = sum(1 for t in asyncio.all_tasks() if t is not asyncio.current_task())
        ASYNC_TASKS_COUNT.labels(service=service_name, state='pending').set(outstanding)
        # 3. Process CPU and memory usage. Read cpu_percent once per sample:
        #    each call reports usage since the previous call, so calling it
        #    twice in a row would make the second reading near zero.
        cpu = _process.cpu_percent(interval=None)
        CPU_USAGE.labels(service=service_name).set(cpu)
        MEMORY_USAGE.labels(service=service_name).set(_process.memory_info().rss)
        # 4. Simple heuristics for potential CPU/memory contention
        if cpu > 80.0:
            COMPETITION_EVENTS.labels(service=service_name, resource_type='cpu').inc()
        if _process.memory_percent() > 80.0:
            COMPETITION_EVENTS.labels(service=service_name, resource_type='memory').inc()
    except Exception as e:
        # The monitor must never crash the main application
        print(f"Error in resource monitor sampling: {e}")

async def monitor_loop(config: Dict[str, Any]) -> None:
    """Background loop that samples metrics at a fixed interval."""
    service_name = config['app']['name']
    sample_interval = config['observability']['metrics'].get('sample_interval_sec', 5.0)
    while True:
        await _sample_metrics(service_name)
        await asyncio.sleep(sample_interval)

async def start_resource_monitor(config: Dict[str, Any]) -> None:
    """Start the resource monitor (metrics HTTP server plus background sampling loop)."""
    global _monitor_task
    if not config['observability']['metrics']['enabled']:
        return
    metrics_port = config['observability']['metrics']['port']
    # Start the Prometheus metrics HTTP server (runs in its own thread)
    start_http_server(metrics_port, registry=REGISTRY)
    print(f"Metrics server started on port {metrics_port}")
    # Start the background sampling task
    _monitor_task = asyncio.create_task(monitor_loop(config))
    print("Resource monitor loop started.")

async def stop_resource_monitor() -> None:
    """Stop the resource monitor."""
    global _monitor_task
    if _monitor_task and not _monitor_task.done():
        _monitor_task.cancel()
        try:
            await _monitor_task
        except asyncio.CancelledError:
            pass
    _monitor_task = None
    print("Resource monitor stopped.")
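The monitor above infers event-loop health from scheduling delay. The following standalone experiment (an illustrative sketch, independent of the project code; all names are local to the snippet) demonstrates the principle: request a 10 ms sleep and measure how far the wake-up overshoots. When another coroutine blocks the loop synchronously, the overshoot grows by roughly the length of the block.

```python
import asyncio
import time

async def measure_latency() -> float:
    """Request a 10 ms sleep; the overshoot approximates scheduling latency."""
    start = time.monotonic()
    await asyncio.sleep(0.01)
    return time.monotonic() - start - 0.01

async def blocking_coro() -> None:
    # Synchronous blocking: busy-wait 100 ms on the event-loop thread,
    # during which no other task can be scheduled.
    end = time.monotonic() + 0.1
    while time.monotonic() < end:
        pass

async def main() -> tuple:
    baseline = await measure_latency()            # latency with an idle loop
    task = asyncio.create_task(measure_latency())
    await asyncio.sleep(0)                        # let the task reach its sleep
    await blocking_coro()                         # now block the event loop
    contended = await task
    return baseline, contended

if __name__ == "__main__":
    base, cont = asyncio.run(main())
    print(f"baseline={base * 1000:.1f}ms contended={cont * 1000:.1f}ms")
```

On a typical machine the baseline is in the low milliseconds while the contended measurement jumps to roughly 90 ms, because the timer fired during the busy-wait but the task could only resume once the loop was free again.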
File: src/core/async_processor.py
"""
Simulates the core edge asynchronous processing logic, demonstrating traced async operations.
"""
import asyncio
import random
from typing import Any, Dict, List
from opentelemetry import trace
from opentelemetry.trace import Status, StatusCode
from src.utils.helpers import random_sleep, simulate_cpu_work

tracer = trace.get_tracer(__name__)

class AsyncDataProcessor:
    """Simulates an asynchronous data-processing unit."""

    async def process(self, data_id: str, complexity: int = 1) -> Dict[str, Any]:
        """
        Process one data item, with simulated I/O waits and CPU computation.
        The whole operation is wrapped in a span; inner steps create child spans.
        """
        # Top-level span for the whole processing operation
        with tracer.start_as_current_span("process_data") as process_span:
            process_span.set_attribute("data.id", data_id)
            process_span.set_attribute("processing.complexity", complexity)
            result = {"id": data_id, "steps": []}
            try:
                # Step 1: simulated data fetch (I/O-bound, asynchronous)
                with tracer.start_as_current_span("fetch_data") as fetch_span:
                    fetch_span.set_attribute("source", "mock_db")
                    await random_sleep([0.02, 0.1])  # simulated network latency
                    result["steps"].append("fetched")
                    mock_data_size = random.randint(100, 1000)  # simulated payload size
                    fetch_span.set_attribute("data.size_bytes", mock_data_size)
                # Step 2: simulated data transformation (mixed CPU and I/O)
                with tracer.start_as_current_span("transform_data") as transform_span:
                    # Sub-step A: CPU computation
                    with tracer.start_as_current_span("compute_features") as compute_span:
                        # **Note**: this is synchronous CPU work and blocks the event loop!
                        # In a real application, consider run_in_executor to offload
                        # CPU-bound work to a thread or process pool.
                        feature_value = simulate_cpu_work(iterations=complexity * 5000)
                        compute_span.set_attribute("feature.value", feature_value)
                        # Record an event marking a likely contention point
                        compute_span.add_event("cpu_intensive_work.completed")
                    # Sub-step B: another asynchronous I/O wait
                    await random_sleep([0.01, 0.05])
                    result["steps"].append("transformed")
                    transform_span.set_attribute("transformation.type", "normalization")
                # Step 3: simulated result storage (asynchronous)
                with tracer.start_as_current_span("store_result") as store_span:
                    store_span.set_attribute("store.backend", "mock_redis")
                    await random_sleep([0.03, 0.08])
                    result["steps"].append("stored")
                result["status"] = "success"
                process_span.set_status(Status(StatusCode.OK))
            except Exception as e:
                process_span.record_exception(e)
                process_span.set_status(Status(StatusCode.ERROR, str(e)))
                result["status"] = f"error: {e}"
                raise
            finally:
                process_span.set_attribute("result.step_count", len(result["steps"]))
            return result

    async def batch_process(self, data_ids: List[str]) -> List[Dict[str, Any]]:
        """Process several data items concurrently, demonstrating concurrent traced tasks."""
        # One span representing the whole batch
        with tracer.start_as_current_span("batch_process") as batch_span:
            batch_span.set_attribute("batch.size", len(data_ids))
            # Launch the per-item coroutines concurrently
            tasks = [self.process(did, random.randint(1, 3)) for did in data_ids]
            # **Key point**: asyncio.gather runs these concurrently. With
            # AsyncioInstrumentor enabled, spans inside each task are attached
            # to the correct trace automatically.
            results = await asyncio.gather(*tasks, return_exceptions=True)
            # Separate successes from failures
            successful = [r for r in results if not isinstance(r, Exception)]
            batch_span.set_attribute("batch.success_count", len(successful))
            return successful
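The `compute_features` step above deliberately blocks the event loop. The common mitigation hinted at in its comment is to offload the synchronous function to an executor. A minimal sketch (the `offload_cpu_work` helper is invented for this article; `simulate_cpu_work` is inlined to keep the example self-contained):

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

def simulate_cpu_work(iterations: int = 1000) -> float:
    # Same mock CPU-bound function as in src/utils/helpers.py
    result = 0.0
    for i in range(iterations):
        result += (i * 0.001) ** 2
    return result

_executor = ThreadPoolExecutor(max_workers=2)

async def offload_cpu_work(iterations: int) -> float:
    """Run the CPU-bound function in a thread pool instead of on the event loop."""
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(_executor, simulate_cpu_work, iterations)

async def main() -> float:
    # The computation runs concurrently with an I/O coroutine; neither starves the other
    value, _ = await asyncio.gather(offload_cpu_work(5000), asyncio.sleep(0.01))
    return value

if __name__ == "__main__":
    print(asyncio.run(main()))
```

Note that for pure-Python computation a thread pool is still limited by the GIL, so this relieves rather than eliminates event-loop starvation; heavier workloads may warrant a ProcessPoolExecutor instead.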
File: src/core/device_simulator.py
"""
Simulates edge-device control, including potentially long-running or contended operations.
"""
import asyncio
import random
import time
from typing import Any, Dict
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

class DeviceSimulator:
    """Simulates a device that requires asynchronous control."""

    def __init__(self, device_id: str):
        self.device_id = device_id
        self._state = "offline"
        self._lock = asyncio.Lock()  # models mutually exclusive access to the device

    async def update_state(self, new_state: str, duration: float) -> Dict[str, Any]:
        """
        Update the device state, simulating a time-consuming operation.
        A lock models device-access contention and shows how lock waits surface in traces.
        """
        span_name = f"device.{self.device_id}.update_state"
        with tracer.start_as_current_span(span_name) as span:
            span.set_attribute("device.id", self.device_id)
            span.set_attribute("target.state", new_state)
            span.set_attribute("operation.duration", duration)
            # **Contention scenario**: several requests may try to update the same
            # device at once. The time spent waiting for the lock reflects the
            # degree of contention.
            loop = asyncio.get_running_loop()
            wait_start = loop.time()
            async with self._lock:
                wait_time = loop.time() - wait_start
                span.set_attribute("lock.wait_seconds", wait_time)
                # Mark unusually long waits with a span event
                if wait_time > 0.05:
                    span.add_event("high_lock_contention", {"wait_time": wait_time})
                self._state = new_state
                # Simulate the time the state change itself takes
                await asyncio.sleep(duration)
                span.set_attribute("final.state", self._state)
                return {"device_id": self.device_id, "state": self._state, "lock_wait": wait_time}

    async def read_sensor(self) -> Dict[str, Any]:
        """Read simulated sensor data (a fast operation)."""
        with tracer.start_as_current_span(f"device.{self.device_id}.read_sensor") as span:
            await asyncio.sleep(random.uniform(0.005, 0.02))  # small simulated delay
            reading = {
                "device_id": self.device_id,
                "temperature": random.uniform(20.0, 35.0),
                "humidity": random.uniform(30.0, 80.0),
                "timestamp": time.time(),
            }
            span.set_attributes({f"sensor.{k}": v for k, v in reading.items() if k != 'device_id'})
            return reading
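The API layer in src/app.py catches asyncio.TimeoutError for device commands, but update_state as written never times out on its own. One way to complete the picture (a sketch; FakeDevice, command_with_timeout, and the 1-second default are assumptions for illustration) is to wrap the device call in asyncio.wait_for:

```python
import asyncio

class FakeDevice:
    """Minimal device stub, only for demonstrating the timeout wrapper."""
    async def update_state(self, new_state: str, duration: float) -> dict:
        await asyncio.sleep(duration)  # simulates the time the state change takes
        return {"state": new_state}

async def command_with_timeout(device: FakeDevice, state: str,
                               duration: float, timeout: float = 1.0) -> dict:
    # Raises asyncio.TimeoutError if the operation exceeds `timeout`,
    # which the route handler can translate into an HTTP 504.
    return await asyncio.wait_for(device.update_state(state, duration), timeout)

async def main() -> tuple:
    dev = FakeDevice()
    ok = await command_with_timeout(dev, "active", duration=0.01)
    try:
        await command_with_timeout(dev, "active", duration=5.0, timeout=0.05)
        timed_out = False
    except asyncio.TimeoutError:
        timed_out = True
    return ok, timed_out

if __name__ == "__main__":
    print(asyncio.run(main()))  # ({'state': 'active'}, True)
```

asyncio.wait_for also cancels the inner coroutine on timeout, so a stuck device operation does not keep holding the device lock indefinitely.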
File: src/app.py
"""
FastAPI route definitions, integrated with observability instrumentation.
"""
import asyncio
import random
from typing import List
from fastapi import APIRouter, BackgroundTasks
from fastapi.responses import JSONResponse
from src.core.async_processor import AsyncDataProcessor
from src.core.device_simulator import DeviceSimulator
from src.utils.helpers import load_config, simulate_cpu_work

# Load configuration
config = load_config()
# Core business objects
processor = AsyncDataProcessor()
# Two simulated devices
devices = {
    "sensor-01": DeviceSimulator("sensor-01"),
    "actuator-02": DeviceSimulator("actuator-02"),
}
router = APIRouter()

@router.get("/")
async def root():
    return {"message": "Edge Async Observability Demo", "status": "operational"}

@router.get("/health")
async def health():
    """Health-check endpoint; responds quickly."""
    return {"status": "healthy"}

@router.get("/process/{data_id}")
async def process_single(data_id: str, complexity: int = 1):
    """Process a single data item."""
    try:
        return await processor.process(data_id, complexity)
    except Exception as e:
        return JSONResponse(status_code=500, content={"error": str(e)})

@router.post("/process/batch")
async def process_batch(data_ids: List[str]):
    """Process a batch of data items."""
    if len(data_ids) > 20:  # simple guard
        return JSONResponse(status_code=400, content={"error": "Batch too large"})
    try:
        results = await processor.batch_process(data_ids)
        return {"batch_results": results}
    except Exception as e:
        return JSONResponse(status_code=500, content={"error": str(e)})

@router.post("/device/{device_id}/command")
async def command_device(device_id: str, state: str, duration: float = 0.5):
    """Send a control command to a simulated device."""
    device = devices.get(device_id)
    if not device:
        return JSONResponse(status_code=404, content={"error": "Device not found"})
    if duration > 5.0:
        return JSONResponse(status_code=400, content={"error": "Duration too long"})
    try:
        return await device.update_state(state, duration)
    except asyncio.TimeoutError:
        return JSONResponse(status_code=504, content={"error": "Device command timeout"})
    except Exception as e:
        return JSONResponse(status_code=500, content={"error": str(e)})

@router.get("/device/{device_id}/sensor")
async def read_device_sensor(device_id: str):
    """Read sensor data from a device."""
    device = devices.get(device_id)
    if not device:
        return JSONResponse(status_code=404, content={"error": "Device not found"})
    try:
        return await device.read_sensor()
    except Exception as e:
        return JSONResponse(status_code=500, content={"error": str(e)})

@router.post("/stress")
async def induce_stress(background_tasks: BackgroundTasks, duration: int = 10):
    """
    Kick off a background stress task that generates resource contention and
    observability data. Warning: demo only; never expose this in production.
    """
    async def _stress_task():
        """Creates a churning mix of concurrent asynchronous operations."""
        print(f"Starting stress task for {duration}s")
        loop = asyncio.get_running_loop()
        start_time = loop.time()
        tasks = set()
        while loop.time() - start_time < duration:
            # Keep creating a mix of short-lived and long-lived tasks
            task_type = random.choice(['short_io', 'long_io', 'cpu_burst'])
            if task_type == 'short_io':
                t = asyncio.create_task(asyncio.sleep(random.uniform(0.01, 0.1)))
            elif task_type == 'long_io':
                t = asyncio.create_task(asyncio.sleep(random.uniform(0.5, 2.0)))
            else:  # cpu_burst
                async def cpu_task():
                    # Note: this runs CPU work directly on the event loop and
                    # blocks it -- exactly the contention we want to surface.
                    simulate_cpu_work(iterations=20000)
                t = asyncio.create_task(cpu_task())
            tasks.add(t)
            # Cap the number of concurrent tasks
            if len(tasks) > 50:
                # Wait for some of them to finish
                done, pending = await asyncio.wait(tasks, timeout=0.1, return_when=asyncio.FIRST_COMPLETED)
                tasks = pending
                # Reap finished tasks to avoid leaking them
                for done_task in done:
                    if done_task.exception():
                        print(f"Stress task error: {done_task.exception()}")
        # Wait for any remaining tasks
        if tasks:
            await asyncio.wait(tasks, timeout=5.0)
        print("Stress task completed.")
    background_tasks.add_task(_stress_task)
    return {"message": f"Stress test started for {duration} seconds in background."}

# Note: FastAPIInstrumentor must run after the app instance is created; see run.py.
4 Installing Dependencies and Running
4.1 Environment
Make sure Python 3.8+ and pip are installed.
4.2 Install dependencies
From the project root, run:
pip install -r requirements.txt
If you want to use the Jaeger exporter, make sure Jaeger is installed and running. A quick way is Docker:
docker run -d --name jaeger \
-e COLLECTOR_ZIPKIN_HOST_PORT=:9411 \
-p 5775:5775/udp \
-p 6831:6831/udp \
-p 6832:6832/udp \
-p 5778:5778 \
-p 16686:16686 \
-p 14268:14268 \
-p 14250:14250 \
-p 9411:9411 \
jaegertracing/all-in-one:latest
4.3 Run the application
- With the console trace exporter (default):
Just run the main script; traces are printed to the console.
python run.py
- With the Jaeger trace exporter:
Set observability.tracing.exporter to jaeger in config.yaml.
Make sure Jaeger is running at the address given by jaeger_endpoint (default http://localhost:14268/api/traces).
Then run:
python run.py
After startup you should see output similar to:
INFO: Started server process [xxxx]
INFO: Waiting for application startup.
Starting observability components...
OpenTelemetry tracing (with asyncio support) initialized.
Metrics server started on port 9464
Resource monitor loop started.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit)
5 Testing and Verification
5.1 Verify the service endpoints
Use curl or a browser to hit the following endpoints:
- Health check:
curl http://localhost:8080/health
Should return: `{"status":"healthy"}`
- Process a single item:
curl "http://localhost:8080/process/test-data-123?complexity=2"
Triggers a traced asynchronous processing flow.
- Batch processing:
curl -X POST http://localhost:8080/process/batch \
  -H "Content-Type: application/json" \
  -d '["id1", "id2", "id3"]'
In the console trace output you should see a `batch_process` span containing three concurrent `process_data` child spans.
- Device control:
curl -X POST "http://localhost:8080/device/sensor-01/command?state=active&duration=1.0"
Simulates a device state update, including lock-contention monitoring.
5.2 Verify the observability data
- Traces (console):
After hitting the endpoints above, watch the application console. You should see structured span output showing parent-child relationships and the timeline of asynchronous execution.
- Traces (Jaeger UI):
If you use Jaeger, open http://localhost:16686 in a browser. Select edge-async-demo in the Service dropdown and click Find Traces. You will see the trace graph for each request, with the relationships and timings of the asynchronous tasks clearly visible.
- Metrics (Prometheus format):
Query the metrics endpoint to see the exposed Prometheus metrics:
curl http://localhost:9464/metrics
The output should include our custom metrics, for example:
# HELP asyncio_event_loop_latency_seconds Estimated event loop scheduling latency
# TYPE asyncio_event_loop_latency_seconds gauge
asyncio_event_loop_latency_seconds{service="edge-async-demo"} 0.000123456
# HELP asyncio_tasks_count Current number of outstanding (not yet finished) asyncio tasks
# TYPE asyncio_tasks_count gauge
asyncio_tasks_count{service="edge-async-demo",state="pending"} 5.0
# HELP resource_competition_events_total Total number of detected potential resource competition events
# TYPE resource_competition_events_total counter
resource_competition_events_total{service="edge-async-demo",resource_type="loop_block"} 2.0
- Trigger resource contention:
To observe contention events more clearly, call the stress-test endpoint (demo only):
curl -X POST "http://localhost:8080/stress?duration=15"
Then refresh `/metrics` or watch the traces in Jaeger. You should see `asyncio_event_loop_latency_seconds` rise, the `resource_competition_events_total{resource_type="loop_block"}` counter grow, and, in the traces, `cpu_intensive_work.completed` events recorded inside the spans of CPU-bound tasks.
Through these steps you can see first-hand how, in a simulated edge asynchronous service, an integrated toolchain pierces the "black box" of the asynchronous runtime and makes request chains, resource usage, and potential contention observable.