Compare commits

94 Commits

| Author | SHA1 | Date |
|---|---|---|
| | c254923bd6 | |
| | e865006cee | |
| | 1430ecf366 | |
| | ce739d1232 | |
| | ba633c8be4 | |
| | 44843475c8 | |
| | 5e9dac6197 | |
| | 77743ce097 | |
| | 28667bcbee | |
| | 395b3f3c4e | |
| | 0c3ef3d07e | |
| | 83e8a96b2b | |
| | 0a8efaf29d | |
| | 34bf4d5e91 | |
| | 24b5f8455e | |
| | e60fa2b982 | |
| | b79cb09987 | |
| | d7ac1bedb4 | |
| | a876c854f4 | |
| | 18c81fb8bb | |
| | d8c33ca3ee | |
| | 27c2fb0b95 | |
| | bdc723d197 | |
| | 531913a83c | |
| | b7e7c173d7 | |
| | 14796231f1 | |
| | 1826d4e34a | |
| | e373e1ce62 | |
| | 4467b62a01 | |
| | 3519003a77 | |
| | 7362bcf206 | |
| | 2cd6c6ef27 | |
| | cb27d1a249 | |
| | 710b6a719e | |
| | 66b5702d03 | |
| | 90634dfacf | |
| | c9784ec48e | |
| | cf1842f4e5 | |
| | f1a215d504 | |
| | 9beba2a306 | |
| | a3340cb630 | |
| | 5afcd3d86a | |
| | e65de2c30b | |
| | bfb40f4947 | |
| | 9854f478c0 | |
| | 3963d137de | |
| | 60d8a3d550 | |
| | 09f20ec81d | |
| | 06ddd5a8e1 | |
| | a14fcdf158 | |
| | 6aa8ba5fbc | |
| | 2eae34bb96 | |
| | 05ae5dff2a | |
| | 64fc023f7b | |
| | 307a26c0db | |
| | 73401309f2 | |
| | f169dd4267 | |
| | 09887b52d0 | |
| | c94b697319 | |
| | 1bfee2026f | |
| | c47e483028 | |
| | 864cbc3569 | |
| | 47a795cd73 | |
| | 92f97b7e51 | |
| | 3bdde47c60 | |
| | 1583758f29 | |
| | 0602e37bc9 | |
| | 41bdb38d51 | |
| | d958aa8d74 | |
| | 2024523b46 | |
| | da722ee07e | |
| | bc8414df3d | |
| | 15f4b610af | |
| | 94ed27199a | |
| | 131c011c5c | |
| | 6d5b8bbb08 | |
| | b4415f25ac | |
| | d469dacc08 | |
| | ee4354b571 | |
| | 50818b54ca | |
| | 480cdf3f6d | |
| | 832fd0fde1 | |
| | fd0a52368a | |
| | 650cc6f1b5 | |
| | ff1c53ea40 | |
| | b79655ccdc | |
| | 1f12197a91 | |
| | d0455fb032 | |
| | b2f6f7d2d0 | |
| | ff439bb831 | |
| | 3980d66b49 | |
| | b16d910051 | |
| | 7e3a8e12d0 | |
| | 1d728d991e | |
.env.example (23 changes)

```diff
@@ -1,10 +1,19 @@
 # PromdataPanel Environment Configuration
 # Note: Database and Cache settings will be automatically configured upon visiting /init.html

 # Server Binding
 HOST=0.0.0.0
-PORT=3051
+PORT=3000

 # Aggregation interval in milliseconds (default 5s)
 REFRESH_INTERVAL=5000

 # Valkey/Redis Cache Configuration
 VALKEY_HOST=localhost
 VALKEY_PORT=6379
 VALKEY_PASSWORD=
 VALKEY_DB=dashboard
 VALKEY_TTL=30
 # Security
 # Keep remote setup disabled unless you explicitly need to initialize from another host.
 ALLOW_REMOTE_SETUP=false
 COOKIE_SECURE=false
 SESSION_TTL_SECONDS=86400
 PASSWORD_ITERATIONS=210000

 # Runtime external data providers
 ENABLE_EXTERNAL_GEO_LOOKUP=false
```
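The `.env` values above are plain `KEY=value` pairs with documented defaults (for example, `PORT` falls back to 3000). A minimal sketch of reading such a file, mirroring what `install.sh` does with `grep`/`cut`; `parseDotEnv` is an illustrative helper, not the panel's actual config loader:

```javascript
// Parse simple KEY=value lines; comments and blanks are ignored.
const parseDotEnv = (text) => {
  const config = {};
  for (const line of text.split('\n')) {
    const m = line.match(/^([A-Z_][A-Z0-9_]*)=(.*)$/);
    if (m) config[m[1]] = m[2];
  }
  return config;
};

const env = parseDotEnv('HOST=0.0.0.0\nPORT=3000\nVALKEY_TTL=30\n# comment\n');
const port = Number(env.PORT || 3000); // documented default
console.log(port); // 3000
```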
README.md (155 changes)

````diff
@@ -1,100 +1,119 @@
-# Data Visualization Display Wall
+# PromdataPanel

-A multi-source Prometheus server-monitoring display wall. Supports multiple Prometheus instances and shows key metrics (CPU, memory, disk, network) for all servers in real time.
+A multi-source Prometheus server-monitoring display wall. Connects to multiple Prometheus instances, aggregates and displays key metrics (CPU, memory, disk, bandwidth) for all servers in real time, and renders a visual node-distribution map.

 ## Features

-- 🔌 **Multi-source management**: configuration stored in MySQL; connect multiple Prometheus instances
-- 📊 **NodeExporter queries**: automatically aggregates NodeExporter data from every Prometheus
-- 🌐 **Network traffic statistics**: 24-hour traffic trend chart and total traffic statistics
-- ⚡ **Real-time bandwidth monitoring**: sums network bandwidth across all servers in real time
-- 💻 **Resource overview**: total CPU, memory, and disk usage with detailed statistics
-- 🖥️ **Server list**: a table of detailed metrics for every server
+- 🔌 **Multi-source management**: connect multiple Prometheus instances (Node_Exporter / BlackboxExporter)
+- 📊 **Automatic metric aggregation**: sums NodeExporter metrics from all data sources and computes the network-wide load in real time
+- 🌐 **Network traffic statistics**: 24-hour traffic trend chart with live summed bandwidth (Rx/Tx)
+- 🗺️ **Node-distribution visualization**: auto-detects server geolocation and shows live connection status and latency on a world map
+- ⚡ **Millisecond-level freshness**: deeply optimized query logic; supports live display at a 5 s scrape interval
+- 📱 **Responsive, polished design**: modern UI/UX, dark mode, aggressive performance optimization

-## Quick Start
+## Quick Install

-### 1. Requirements
+### Option 1: One-line script install (recommended)

-- Node.js >= 16
-- MySQL >= 5.7
-- Valkey >= 7.0 (or Redis >= 6.0)
-
-### 2. Configuration
-
-Copy the environment file and edit it:
+On a Linux server, the following script downloads the project, checks the environment, installs dependencies, and registers it as a systemd service:

 ```bash
-cp .env.example .env
+# Download and install the latest release (defaults to v0.1.0)
+VERSION=v0.1.0 curl -sSL https://git.littlediary.cn/CN-JS-HuiBai/PromdataPanel/raw/branch/main/install.sh | bash
 ```

-Edit `.env` and fill in the MySQL and Valkey connection details:
+### Option 2: Manual install

-```env
-# MySQL configuration
-MYSQL_HOST=localhost
-MYSQL_PORT=3306
-MYSQL_USER=root
-MYSQL_PASSWORD=your_password
-MYSQL_DATABASE=display_wall
+#### 1. Requirements
+- **Node.js** >= 18
+- **MySQL** >= 8.0
+- **Valkey** >= 7.0 (or Redis >= 6.0)

-# Valkey/Redis cache configuration (optional)
-VALKEY_HOST=localhost
-VALKEY_PORT=6379
-VALKEY_PASSWORD=
-VALKEY_TTL=30
+#### 2. Configure and start
+1. Clone the repository: `git clone https://git.littlediary.cn/CN-JS-HuiBai/PromdataPanel.git`
+2. Copy the config file: `cp .env.example .env`
+3. Install dependencies: `npm install --production`
+4. Start the service: `npm start`

-PORT=3000
-```
+### Option 3: Update an existing install

-### 3. System initialization
-
-Visit `http://localhost:3000/init.html` and follow the guided setup to initialize the database and cache.
-
-### 4. Install dependencies and start
+If the system is already installed, the bundled `update.sh` script upgrades it to the latest code in one step:

 ```bash
-npm install
-npm run dev
+# Run from the program directory
+curl -sSL https://git.littlediary.cn/CN-JS-HuiBai/PromdataPanel/raw/branch/main/update.sh | bash
 ```

-Open `http://localhost:3000` to see the display wall.
+#### 3. System initialization
+After the first run, visit `http://your-ip:3000/init.html` and follow the guided setup to connect the MySQL database and the Valkey cache.

-### 5. Configure Prometheus data sources
+## Usage Guide

-Click the ⚙️ button in the top-right corner and add your Prometheus address (e.g. `http://prometheus.example.com:9090`).
+### 1. Add a Prometheus data source
+Click the ⚙️ button in the top-right corner to open the settings, then add and test your Prometheus HTTP address.

-### 6. Prometheus configuration reference (example)
-
-In your Prometheus `prometheus.yml`, the following configuration is recommended (a `scrape_interval` of `5s` gives the best real-time display):
+### 2. Prometheus scrape configuration
+A 5 s scrape interval in `prometheus.yml` is recommended for smooth real-time animation:

 ```yaml
 global:
   scrape_interval: 5s
+scrape_configs:
+  - job_name: '机器名称'
+    static_configs:
+      - targets: ['IP:Port']
+```

-scrape_configs:
-  - job_name: 'nodes'
-    static_configs:
-      - targets: ['your-server-ip:9100']
-```

 ## Tech Stack

-- **Backend**: Node.js + Express
-- **Database**: MySQL (stores configuration data)
-- **Cache**: Valkey / Redis (accelerates reads of traffic-calculation results)
-- **Data source**: Prometheus HTTP API
-- **Frontend**: vanilla HTML/CSS/JavaScript
-- **Charts**: custom Canvas rendering
+- **Runtime**: Node.js
+- **Framework**: Express.js
+- **Database**: MySQL 8.0+
+- **Caching**: Valkey / Redis
+- **Visualization**: ECharts / Canvas
+- **Frontend**: Vanilla JS / CSS3

-## API
+## API Reference

-| Method | Path | Description |
-|------|------|------|
-| GET | `/api/sources` | List all data sources |
-| POST | `/api/sources` | Add a data source |
-| PUT | `/api/sources/:id` | Update a data source |
-| DELETE | `/api/sources/:id` | Delete a data source |
-| POST | `/api/sources/test` | Test a data source connection |
-| GET | `/api/metrics/overview` | Aggregated metrics overview |
-| GET | `/api/metrics/network-history` | 24 h network traffic history |
-| GET | `/api/metrics/cpu-history` | CPU usage history |
+The project exposes a full RESTful API for data collection, system configuration, and status monitoring.

+### 1. Authentication (`/api/auth`)
+- `POST /api/auth/login`: log in
+- `POST /api/auth/logout`: log out
+- `POST /api/auth/change-password`: change password (login required)
+- `GET /api/auth/status`: current login status

+### 2. Data source management (`/api/sources`)
+- `GET /api/sources`: list all Prometheus data sources and their status
+- `POST /api/sources`: add a data source (login required)
+- `PUT /api/sources/:id`: update a data source (login required)
+- `DELETE /api/sources/:id`: delete a data source (login required)
+- `POST /api/sources/test`: test data source connectivity (login required)

+### 3. Metrics (`/api/metrics`)
+- `GET /api/metrics/overview`: aggregated real-time metrics for all servers (CPU, memory, disk, network)
+- `GET /api/metrics/network-history`: network-wide 24-hour traffic history
+- `GET /api/metrics/cpu-history`: network-wide CPU usage history
+- `GET /api/metrics/server-details`: detailed real-time metrics for a specific server
+- `GET /api/metrics/server-history`: historical metrics for a specific server
+- `GET /api/metrics/latency`: real-time inter-node latency data

+### 4. System configuration and monitoring
+- `GET /api/settings`: read global site settings
+- `POST /api/settings`: update global site settings (login required)
+- `GET /health`: system health report (database, cache, memory, and related status)

+### 5. Latency route management (`/api/latency-routes`)
+- `GET /api/latency-routes`: list all configured latency-probe routes
+- `POST /api/latency-routes`: add a latency-probe route (login required)
+- `PUT /api/latency-routes/:id`: update a latency-probe route (login required)
+- `DELETE /api/latency-routes/:id`: delete a latency-probe route (login required)

+### 6. Real-time communication (WebSocket)
+The server pushes live updates over WebSocket on the same port as the HTTP service:
+- **Message type `overview`**: aggregated metrics, per-server online status, and geo-resolved latency-route data.

 ## LICENSE

 MIT License
````
install.sh (416 changes)

```diff
@@ -1,238 +1,262 @@
 #!/bin/bash

-# Data Visualization Display Wall - Systemd Installer
-# Requirements: Node.js, NPM, Systemd (Linux)
+set -euo pipefail

 # Colors for output
 RED='\033[0;31m'
 GREEN='\033[0;32m'
 YELLOW='\033[1;33m'
 BLUE='\033[0;34m'
-NC='\033[0m' # No Color
+NC='\033[0m'

-echo -e "${BLUE}=== Data Visualization Display Wall Installer ===${NC}"
+VERSION=${VERSION:-"v0.1.0"}
+DOWNLOAD_URL="https://git.littlediary.cn/CN-JS-HuiBai/PromdataPanel/archive/${VERSION}.zip"
+MIN_NODE_VERSION=18
+SERVICE_NAME="promdatapanel"
+SERVICE_FILE="/etc/systemd/system/${SERVICE_NAME}.service"

-# 1. Permission check (no longer mandatory)
-if [ "$EUID" -eq 0 ]; then
-    # If run as sudo, get the real user that called it
-    REAL_USER=${SUDO_USER:-$USER}
-else
-    REAL_USER=$USER
-fi
+OS_ID=""
+OS_VER=""
+PROJECT_DIR=""
+REAL_USER=""

-# 2. Get current directory and user
-PROJECT_DIR=$(pwd)
-USER_HOME=$(getent passwd "$REAL_USER" | cut -d: -f6)
+echo -e "${BLUE}================================================${NC}"
+echo -e "${BLUE}          PromdataPanel Auto-Installer          ${NC}"
+echo -e "${BLUE}          Version: ${VERSION}                   ${NC}"
+echo -e "${BLUE}================================================${NC}"

-echo -e "Project Directory: ${GREEN}$PROJECT_DIR${NC}"
-echo -e "Running User: ${GREEN}$REAL_USER${NC}"

-# 3. Check for mandatory files
-if [ ! -f "server/index.js" ]; then
-    echo -e "${RED}Error: server/index.js not found. Please run this script from the project root.${NC}"
+detect_os() {
+    if [ -f /etc/os-release ]; then
+        # shellcheck disable=SC1091
+        . /etc/os-release
+        OS_ID="${ID:-}"
+        OS_VER="${VERSION_ID:-}"
+    else
+        echo -e "${RED}Error: Cannot detect operating system type (/etc/os-release missing).${NC}"
         exit 1
     fi
 fi

-# 4. Check for dependencies
-echo -e "${BLUE}Checking dependencies...${NC}"
-check_dep() {
-    if ! command -v "$1" &> /dev/null; then
-        echo -e "${RED}$1 is not installed. Please install $1 first.${NC}"
+    if [ -z "$OS_ID" ]; then
+        echo -e "${RED}Error: Unable to determine operating system ID.${NC}"
         exit 1
     fi

+    echo -e "Detected OS: ${GREEN}${OS_ID} ${OS_VER}${NC}"
 }

+require_cmd() {
+    local cmd="$1"
+    local hint="${2:-}"
+    if ! command -v "$cmd" >/dev/null 2>&1; then
+        echo -e "${RED}Missing required command: ${cmd}.${NC}"
+        if [ -n "$hint" ]; then
+            echo -e "${YELLOW}${hint}${NC}"
+        fi
+        exit 1
+    fi
+}
-check_dep node
-check_dep npm

-# 5. Check for .env file
-if [ ! -f ".env" ]; then
-    echo -e "${YELLOW}Warning: .env file not found.${NC}"
-    if [ -f ".env.example" ]; then
-        echo -e "Creating .env from .env.example..."
-        cp .env.example .env
-        echo -e "${GREEN}Created .env file. Please ensure values are correct.${NC}"
-    else
-        echo -e "${RED}Error: .env.example not found. Configuration missing.${NC}"
-    fi
-fi

-# 6. Install NPM dependencies
-echo -e "${BLUE}Installing dependencies...${NC}"
-npm install

-if [ $? -ne 0 ]; then
-    echo -e "${RED}NPM install failed.${NC}"
+install_packages() {
+    case "$OS_ID" in
+        ubuntu|debian|raspbian)
+            sudo apt-get update
+            sudo apt-get install -y "$@"
+            ;;
+        centos|rhel|almalinux|rocky)
+            sudo yum install -y "$@"
+            ;;
+        fedora)
+            sudo dnf install -y "$@"
+            ;;
+        *)
+            echo -e "${RED}Unsupported OS for automatic package installation: ${OS_ID}${NC}"
+            echo -e "${YELLOW}Please install the following packages manually: $*${NC}"
             exit 1
-fi
+            ;;
+    esac
+}

-# 7. Create Systemd Service File
-SERVICE_FILE="/etc/systemd/system/promdatapanel.service"
-NODE_PATH=$(command -v node)
+ensure_tooling() {
+    if ! command -v curl >/dev/null 2>&1; then
+        echo -e "${BLUE}Installing curl...${NC}"
+        install_packages curl
+    fi

-echo -e "${BLUE}Creating systemd service at $SERVICE_FILE... (May require password)${NC}"
-sudo bash -c "cat <<EOF > '$SERVICE_FILE'
+    if ! command -v unzip >/dev/null 2>&1; then
+        echo -e "${BLUE}Installing unzip...${NC}"
+        install_packages unzip
+    fi
+}

+configure_nodesource_apt_repo() {
+    sudo install -d -m 0755 /etc/apt/keyrings
+    curl -fsSL https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key | sudo gpg --dearmor -o /etc/apt/keyrings/nodesource.gpg
+    echo "deb [signed-by=/etc/apt/keyrings/nodesource.gpg] https://deb.nodesource.com/node_20.x nodistro main" | sudo tee /etc/apt/sources.list.d/nodesource.list >/dev/null
+}

+install_node() {
+    echo -e "${BLUE}Verifying Node.js environment...${NC}"

+    local node_installed=false
+    if command -v node >/dev/null 2>&1; then
+        local current_node_ver
+        current_node_ver=$(node -v | cut -d'v' -f2 | cut -d'.' -f1)
+        if [ "$current_node_ver" -ge "$MIN_NODE_VERSION" ]; then
+            echo -e "${GREEN}Node.js $(node -v) is already installed.${NC}"
+            node_installed=true
+        else
+            echo -e "${YELLOW}Existing Node.js $(node -v) is too old (requires >= ${MIN_NODE_VERSION}).${NC}"
+        fi
+    fi

+    if [ "$node_installed" = true ]; then
+        return
+    fi

+    echo -e "${BLUE}Installing Node.js 20.x...${NC}"
+    case "$OS_ID" in
+        ubuntu|debian|raspbian)
+            install_packages ca-certificates curl gnupg
+            configure_nodesource_apt_repo
+            sudo apt-get update
+            sudo apt-get install -y nodejs
+            ;;
+        centos|rhel|almalinux|rocky)
+            install_packages nodejs
+            ;;
+        fedora)
+            install_packages nodejs
+            ;;
+        *)
+            echo -e "${RED}Unsupported OS for automatic Node.js installation: ${OS_ID}${NC}"
+            echo -e "${YELLOW}Please install Node.js >= ${MIN_NODE_VERSION} manually.${NC}"
+            exit 1
+            ;;
+    esac

+    require_cmd node "Please install Node.js >= ${MIN_NODE_VERSION} manually and rerun the installer."
+    local installed_major
+    installed_major=$(node -v | cut -d'v' -f2 | cut -d'.' -f1)
+    if [ "$installed_major" -lt "$MIN_NODE_VERSION" ]; then
+        echo -e "${RED}Installed Node.js $(node -v) is still below the required version.${NC}"
+        echo -e "${YELLOW}Please upgrade Node.js manually to >= ${MIN_NODE_VERSION}.${NC}"
+        exit 1
+    fi
+}

+download_project_if_needed() {
+    if [ -f "server/index.js" ]; then
+        return
+    fi

+    echo -e "${YELLOW}Project files not found. Starting download...${NC}"
+    ensure_tooling

+    local temp_dir
+    temp_dir=$(mktemp -d "${TMPDIR:-/tmp}/promdatapanel-install-XXXXXX")
+    local temp_zip="${temp_dir}/promdatapanel_${VERSION}.zip"

+    echo -e "${BLUE}Downloading ${DOWNLOAD_URL}...${NC}"
+    curl -fL "$DOWNLOAD_URL" -o "$temp_zip"

+    echo -e "${BLUE}Extracting files...${NC}"
+    unzip -q "$temp_zip" -d "$temp_dir"

+    local extracted_dir
+    extracted_dir=$(find "$temp_dir" -mindepth 1 -maxdepth 1 -type d | head -n 1)
+    if [ -z "$extracted_dir" ] || [ ! -f "$extracted_dir/server/index.js" ]; then
+        echo -e "${RED}Download succeeded, but archive structure is invalid.${NC}"
+        exit 1
+    fi

+    cd "$extracted_dir"
+}

+detect_runtime_user() {
+    if [ "$EUID" -eq 0 ]; then
+        REAL_USER="${SUDO_USER:-${USER:-root}}"
+    else
+        REAL_USER="${USER}"
+    fi
+}

+write_service_file() {
+    local node_path
+    node_path=$(command -v node)
+    if [ -z "$node_path" ]; then
+        echo -e "${RED}Unable to locate node executable after installation.${NC}"
+        exit 1
+    fi

+    local tmp_service
+    tmp_service=$(mktemp "${TMPDIR:-/tmp}/${SERVICE_NAME}.service.XXXXXX")

+    cat > "$tmp_service" <<EOF
 [Unit]
-Description=Data Visualization Display Wall
+Description=PromdataPanel Monitoring Dashboard
 After=network.target mysql.service redis-server.service valkey-server.service
 Wants=mysql.service

 [Service]
 Type=simple
-User=$REAL_USER
-WorkingDirectory=$PROJECT_DIR
-ExecStart=$NODE_PATH server/index.js
+User=${REAL_USER}
+WorkingDirectory=${PROJECT_DIR}
+ExecStart=${node_path} server/index.js
 Restart=always
 RestartSec=10
-StandardOutput=syslog
-StandardError=syslog
-SyslogIdentifier=promdatapanel
-# Pass environment via .env file injection
-EnvironmentFile=-$PROJECT_DIR/.env
+StandardOutput=journal
+StandardError=journal
+SyslogIdentifier=${SERVICE_NAME}
+EnvironmentFile=-${PROJECT_DIR}/.env
+Environment=NODE_ENV=production

 # Security Hardening
 CapabilityBoundingSet=
 NoNewPrivileges=true
 LimitNOFILE=65535

 [Install]
 WantedBy=multi-user.target
-EOF"
+EOF

-# 8. Reload Systemd and Start
-echo -e "${BLUE}Reloading systemd and restarting service... (May require password)${NC}"
-sudo systemctl daemon-reload
-sudo systemctl enable promdatapanel
-sudo systemctl restart promdatapanel
+    echo -e "${BLUE}Creating systemd service at ${SERVICE_FILE}...${NC}"
+    sudo install -m 0644 "$tmp_service" "$SERVICE_FILE"
+    rm -f "$tmp_service"
+}

-# 9. Check Status
-echo -e "${BLUE}Checking service status...${NC}"
-sleep 2
-if sudo systemctl is-active --quiet promdatapanel; then
-    echo -e "${GREEN}SUCCESS: Service is now running.${NC}"
-    PORT=$(grep "^PORT=" .env | cut -d'=' -f2)
-    PORT=${PORT:-3000}
-    echo -e "Dashboard URL: ${YELLOW}http://localhost:${PORT}${NC}"
-    echo -e "View logs: ${BLUE}journalctl -u promdatapanel -f${NC}"
-else
-    echo -e "${RED}FAILED: Service failed to start.${NC}"
-    echo -e "Check logs with: ${BLUE}journalctl -u promdatapanel -xe${NC}"
+detect_os
+download_project_if_needed
+detect_runtime_user
+install_node

+PROJECT_DIR=$(pwd)
+echo -e "Project Directory: ${GREEN}${PROJECT_DIR}${NC}"
+echo -e "Running User: ${GREEN}${REAL_USER}${NC}"

+if [ ! -f ".env" ] && [ -f ".env.example" ]; then
+    echo -e "${BLUE}Creating .env from .env.example...${NC}"
+    cp .env.example .env
+fi

-# 10. Reverse Proxy Configuration
-echo -ne "${YELLOW}Do you want to configure a reverse proxy (Nginx/Caddy)? (y/n): ${NC}"
-read -r CONF_PROXY
-if [[ "$CONF_PROXY" =~ ^[Yy]$ ]]; then
-    echo -e "${BLUE}=== Reverse Proxy Configuration ===${NC}"
+echo -e "${BLUE}Installing NPM dependencies...${NC}"
+npm install --production

-    # Get Domain
-    echo -ne "Enter your domain name (e.g., monitor.example.com): "
-    read -r DOMAIN
-    if [ -z "$DOMAIN" ]; then
-        echo -e "${RED}Error: Domain cannot be empty. Skipping proxy configuration.${NC}"
-    else
-        # Get Port from .env
-        PORT=$(grep "^PORT=" .env | cut -d'=' -f2)
+write_service_file

+echo -e "${BLUE}Reloading systemd and restarting service...${NC}"
+sudo systemctl daemon-reload
+sudo systemctl enable "$SERVICE_NAME"
+sudo systemctl restart "$SERVICE_NAME"

+echo -e "${BLUE}Checking service status...${NC}"
+sleep 2
+if sudo systemctl is-active --quiet "$SERVICE_NAME"; then
+    echo -e "${GREEN}SUCCESS: PromdataPanel is now running.${NC}"
+    PORT=$(grep "^PORT=" .env 2>/dev/null | cut -d'=' -f2 || true)
+    PORT=${PORT:-3000}

-        # Choose Proxy
-        echo -e "Select Proxy Type:"
-        echo -e "  1) Caddy (Automatic SSL, easy to use)"
-        echo -e "  2) Nginx (Advanced, manual SSL)"
-        echo -ne "Choose (1/2): "
-        read -r PROXY_TYPE

-        # Enable HTTPS?
-        echo -ne "Enable HTTPS (SSL)? (y/n): "
-        read -r ENABLE_HTTPS

-        if [ "$PROXY_TYPE" == "1" ]; then
-            # Caddy Config
-            CADDY_FILE="Caddyfile"
-            echo -e "${BLUE}Generating Caddyfile...${NC}"

-            if [[ "$ENABLE_HTTPS" =~ ^[Yy]$ ]]; then
-                cat <<EOF > "$CADDY_FILE"
-$DOMAIN {
-    reverse_proxy localhost:$PORT
-}
-EOF
-            else
-                cat <<EOF > "$CADDY_FILE"
-http://$DOMAIN {
-    reverse_proxy localhost:$PORT
-}
-EOF
-            fi
-            echo -e "${GREEN}Caddyfile generated at $PROJECT_DIR/$CADDY_FILE${NC}"
-            echo -e "${YELLOW}Tip: Ensure Caddy is installed and pointing to this file.${NC}"

-        elif [ "$PROXY_TYPE" == "2" ]; then
-            # Nginx Config
-            echo -ne "Enter Nginx configuration export path (default: ./${DOMAIN}.conf): "
-            read -r NGINX_PATH
-            NGINX_PATH=${NGINX_PATH:-"./${DOMAIN}.conf"}

-            echo -e "${BLUE}Generating Nginx configuration...${NC}"

-            if [[ "$ENABLE_HTTPS" =~ ^[Yy]$ ]]; then
-                echo -ne "Enter SSL Certificate Path: "
-                read -r SSL_CERT
-                echo -ne "Enter SSL Key Path: "
-                read -r SSL_KEY

-                cat <<EOF > "$NGINX_PATH"
-server {
-    listen 80;
-    server_name $DOMAIN;
-    return 301 https://\$host\$request_uri;
-}

-server {
-    listen 443 ssl http2;
-    server_name $DOMAIN;

-    ssl_certificate $SSL_CERT;
-    ssl_certificate_key $SSL_KEY;

-    location / {
-        proxy_pass http://localhost:$PORT;
-        proxy_http_version 1.1;
-        proxy_set_header Upgrade \$http_upgrade;
-        proxy_set_header Connection "upgrade";
-        proxy_set_header Host \$host;
-        proxy_set_header X-Real-IP \$remote_addr;
-        proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
-        proxy_set_header X-Forwarded-Proto \$scheme;
-    }
-}
-EOF
-            else
-                cat <<EOF > "$NGINX_PATH"
-server {
-    listen 80;
-    server_name $DOMAIN;

-    location / {
-        proxy_pass http://localhost:$PORT;
-        proxy_http_version 1.1;
-        proxy_set_header Upgrade \$http_upgrade;
-        proxy_set_header Connection "upgrade";
-        proxy_set_header Host \$host;
-        proxy_set_header X-Real-IP \$remote_addr;
-        proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
-        proxy_set_header X-Forwarded-Proto \$scheme;
-    }
-}
-EOF
-            fi
-            echo -e "${GREEN}Nginx config generated at $NGINX_PATH${NC}"
-            echo -e "${YELLOW}Tip: You can symlink this to /etc/nginx/sites-enabled/ to activate.${NC}"
-        else
-            echo -e "${YELLOW}Unknown proxy type selected. Skipping.${NC}"
-        fi
+    IP_ADDR=$(hostname -I 2>/dev/null | awk '{print $1}')
+    if [ -n "${IP_ADDR:-}" ]; then
+        echo -e "Dashboard URL: ${YELLOW}http://${IP_ADDR}:${PORT}${NC}"
+    fi
 else
+    echo -e "${RED}FAILED: Service failed to start.${NC}"
+    echo -e "Check logs with: ${BLUE}journalctl -u ${SERVICE_NAME} -xe${NC}"
 fi

+echo -e "${BLUE}================================================${NC}"
-echo -e "${GREEN}Setup completed successfully!${NC}"
+echo -e "${GREEN}Installation completed!${NC}"
+echo -e "${BLUE}================================================${NC}"
```
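The installer gates on a major version with `node -v | cut -d'v' -f2 | cut -d'.' -f1` compared against `MIN_NODE_VERSION=18`. The same check, sketched in Node for illustration; `parseMajor` and `meetsMinimum` are hypothetical helpers, not part of the repository:

```javascript
const MIN_NODE_VERSION = 18; // matches install.sh

// Equivalent of: node -v | cut -d'v' -f2 | cut -d'.' -f1
const parseMajor = (versionString) =>
  Number.parseInt(versionString.replace(/^v/, '').split('.')[0], 10);

const meetsMinimum = (versionString) => parseMajor(versionString) >= MIN_NODE_VERSION;

console.log(parseMajor('v20.11.1'));   // 20
console.log(meetsMinimum('v16.20.0')); // false
console.log(meetsMinimum(process.version)); // depends on the local runtime
```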
package.json

```diff
@@ -1,12 +1,13 @@
 {
-    "name": "data-visualization-display-wall",
+    "name": "promdatapanel",
     "version": "1.0.0",
     "description": "Data Visualization Display Wall - Multi-Prometheus Monitoring Dashboard",
     "main": "server/index.js",
     "scripts": {
         "dev": "node server/index.js",
         "start": "node server/index.js",
-        "init-db": "node server/init-db.js"
+        "init-db": "node server/init-db.js",
+        "db-migrate": "node server/init-db.js"
     },
     "dependencies": {
         "axios": "^1.7.0",
```
File diff suppressed because it is too large.
@@ -5,19 +5,21 @@
|
||||
<meta charset="UTF-8">
|
||||
<meta name="viewport" content="width=device-width, initial-scale=1.0">
|
||||
<meta name="description" content="LDNET-GA">
|
||||
<title>LDNET-GA</title>
|
||||
<link rel="preconnect" href="https://fonts.googleapis.com">
|
||||
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
|
||||
<link
|
||||
href="https://fonts.googleapis.com/css2?family=Inter:wght@300;400;500;600;700;800;900&family=JetBrains+Mono:wght@400;500;600&display=swap"
|
||||
rel="stylesheet">
|
||||
<title></title>
|
||||
<link rel="icon" id="siteFavicon" href="data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7">
|
||||
<link rel="stylesheet" href="/css/style.css">
|
||||
<script src="https://cdn.jsdelivr.net/npm/echarts@5.4.3/dist/echarts.min.js"></script>
|
||||
<script src="/vendor/echarts.min.js"></script>
|
||||
<script>
|
||||
// Prevent theme flicker
|
||||
(function () {
|
||||
const savedTheme = localStorage.getItem('theme');
|
||||
const settings = window.SITE_SETTINGS || {};
|
||||
const sanitizeAssetUrl = (url) => {
|
||||
if (!url || typeof url !== 'string') return null;
|
||||
const trimmed = url.trim();
|
||||
if (!trimmed) return null;
|
||||
return /^(https?:|data:image\/|\/)/i.test(trimmed) ? trimmed : null;
|
||||
};
|
||||
const defaultTheme = settings.default_theme || 'dark';
|
||||
let theme = savedTheme || defaultTheme;
|
||||
|
||||
@@ -29,12 +31,69 @@
|
||||
document.documentElement.classList.add('light-theme');
|
||||
}
|
||||
|
||||
// Also apply title if available to prevent flicker
|
||||
// Also apply title and favicon if available to prevent flicker
|
||||
if (settings.page_name) {
|
||||
document.title = settings.page_name;
|
||||
}
|
||||
|
||||
const safeFaviconUrl = sanitizeAssetUrl(settings.favicon_url);
|
||||
if (safeFaviconUrl) {
|
||||
const link = document.getElementById('siteFavicon');
|
||||
if (link) link.href = safeFaviconUrl;
|
||||
}
|
||||
|
||||
// Advanced Anti-Flicker: Wait for header elements to appear
|
||||
const observer = new MutationObserver(function(mutations, me) {
|
||||
const logoText = document.getElementById('logoText');
|
||||
const logoIcon = document.getElementById('logoIconContainer');
|
||||
const header = document.getElementById('header');
|
||||
|
||||
if (logoText || logoIcon) {
|
||||
// If we found either, apply what we have
|
||||
if (logoText) {
|
||||
const displayTitle = settings.title || settings.page_name || '数据可视化展示大屏';
|
||||
logoText.textContent = displayTitle;
|
||||
if (settings.show_page_name === 0) logoText.style.display = 'none';
|
||||
}
|
||||
|
||||
if (logoIcon) {
|
||||
const actualTheme = document.documentElement.classList.contains('light-theme') ? 'light' : 'dark';
|
||||
const logoToUse = sanitizeAssetUrl((actualTheme === 'dark' && settings.logo_url_dark) ? settings.logo_url_dark : (settings.logo_url || null));
|
||||
if (logoToUse) {
|
||||
const img = document.createElement('img');
|
||||
img.src = logoToUse;
|
||||
img.alt = 'Logo';
|
||||
img.className = 'logo-icon-img';
|
||||
logoIcon.replaceChildren(img);
|
||||
} else {
|
||||
// Only if we REALLY have no logo URL, we show the default SVG fallback
|
||||
// (But since it's already in HTML, we just don't touch it or we show it if we hid it)
|
||||
const svg = logoIcon.querySelector('svg');
|
||||
if (svg) svg.style.visibility = 'visible';
|
||||
}
|
||||
}
|
||||
|
||||
// Once found everything or we are past header, we are done
|
||||
if (logoText && logoIcon) me.disconnect();
|
||||
}
|
||||
});
|
||||
observer.observe(document.documentElement, { childList: true, subtree: true });
|
||||
})();
|
||||
</script>
|
||||
<script>
|
||||
// Global Error Logger for remote debugging
|
||||
window.onerror = function(msg, url, line, col, error) {
|
||||
var debugDiv = document.getElementById('js-debug-overlay');
|
||||
if (!debugDiv) {
|
||||
debugDiv = document.createElement('div');
|
||||
debugDiv.id = 'js-debug-overlay';
|
||||
debugDiv.style.cssText = 'position:fixed;top:0;left:0;width:100%;background:rgba(220,38,38,0.95);color:white;z-index:99999;padding:10px;font-family:monospace;font-size:12px;max-height:30vh;overflow:auto;pointer-events:none;';
|
||||
document.body.appendChild(debugDiv);
|
||||
}
|
||||
debugDiv.innerHTML += '<div>[JS ERROR] ' + msg + ' at ' + line + ':' + col + '</div>';
|
||||
return false;
|
||||
};
|
||||
</script>
|
||||
</head>
|
||||
|
||||
<body>
|
||||
@@ -51,7 +110,7 @@
|
||||
<div class="header-left">
|
||||
<div class="logo">
|
||||
<div id="logoIconContainer">
|
||||
<svg class="logo-icon" id="logoSvg" viewBox="0 0 32 32" fill="none">
<svg class="logo-icon" id="logoSvg" viewBox="0 0 32 32" fill="none" style="visibility: hidden;">
<rect x="2" y="2" width="28" height="28" rx="8" stroke="url(#logoGrad)" stroke-width="2.5" />
<path d="M8 22 L12 14 L16 18 L20 10 L24 16" stroke="url(#logoGrad)" stroke-width="2"
stroke-linecap="round" stroke-linejoin="round" fill="none" />
@@ -65,7 +124,7 @@
</defs>
</svg>
</div>
<h1 class="logo-text" id="logoText">数据可视化展示大屏</h1>
<h1 class="logo-text" id="logoText"></h1>
</div>
</div>
<div class="header-right">
@@ -96,6 +155,13 @@
<div id="userSection">
<button class="btn btn-login" id="btnLogin">登录</button>
</div>
<button class="btn-refresh-global" id="btnGlobalRefresh" title="全局强制刷新数据" style="display: none;">
<svg viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2">
<polyline points="23 4 23 10 17 10" />
<polyline points="1 20 1 14 7 14" />
<path d="M3.51 9a9 9 0 0 1 14.85-3.36L23 10M1 14l4.64 4.36A9 9 0 0 0 20.49 15" />
</svg>
</button>
<button class="btn-settings" id="btnSettings" title="配置管理">
<svg viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round"
stroke-linejoin="round">
@@ -203,6 +269,13 @@
</svg>
网络流量趋势 (24h)
</h2>
<div class="chart-header-actions">
<button class="btn-icon" id="btnRefreshNetwork" title="刷新流量趋势">
<svg viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" style="width: 16px; height: 16px;">
<path d="M23 4v6h-6M1 20v-6h6M3.51 9a9 9 0 0 1 14.85-3.36L23 10M1 14l4.64 4.36A9 9 0 0 0 20.49 15"></path>
</svg>
</button>
</div>
</div>
<div class="chart-legend">
<span class="legend-item" id="legendRx" style="cursor: pointer;" title="点击切换 接收 (RX) 显示/隐藏"><span
@@ -250,7 +323,7 @@
<path
d="M2 12h20M12 2a15.3 15.3 0 0 1 4 10 15.3 15.3 0 0 1-4 10 15.3 15.3 0 0 1-4-10 15.3 15.3 0 0 1 4-10z" />
</svg>
全球服务器分布
全球骨干分布
</h2>
<div class="chart-header-actions">
<button class="btn-icon" id="btnExpandGlobe" title="放大显示">
@@ -326,11 +399,13 @@
<th class="sortable" data-sort="disk">磁盘 <span class="sort-icon"></span></th>
<th class="sortable" data-sort="netRx">网络 ↓ <span class="sort-icon"></span></th>
<th class="sortable" data-sort="netTx">网络 ↑ <span class="sort-icon"></span></th>
<th class="sortable" data-sort="conntrack">Conntrack <span class="sort-icon"></span></th>
<th class="sortable" data-sort="traffic24h">24h 流量 <span class="sort-icon"></span></th>
</tr>
</thead>
<tbody id="serverTableBody">
<tr class="empty-row">
<td colspan="8">暂无数据 - 请先配置 Prometheus 数据源</td>
<td colspan="10">暂无数据 - 请先配置 Prometheus 数据源</td>
</tr>
</tbody>
</table>
@@ -354,6 +429,20 @@
</section>
</main>

<!-- Footer -->
<footer class="site-footer">
<div class="footer-content">
<div class="copyright">© <span id="copyrightYear"></span> LDNET-GA-Service. All rights reserved.</div>
<div class="filings">
<a href="http://www.beian.gov.cn/portal/registerSystemInfo" target="_blank" id="psFilingDisplay" style="display: none;">
<span id="psFilingText"></span>
</a>
<span class="filing-sep"></span>
<a href="https://beian.miit.gov.cn/" target="_blank" id="icpFilingDisplay" style="display: none;"></a>
</div>
</div>
</footer>

<!-- Settings Modal -->
<div class="modal-overlay" id="settingsModal">
<div class="modal">
@@ -361,6 +450,7 @@
<div class="modal-tabs">
<button class="modal-tab active" data-tab="prom">数据源管理</button>
<button class="modal-tab" data-tab="site">大屏设置</button>
<button class="modal-tab" data-tab="security">安全设置</button>
<button class="modal-tab" data-tab="latency">延迟线路管理</button>
<button class="modal-tab" data-tab="auth">账号安全</button>
</div>
@@ -393,17 +483,31 @@
<div class="form-row">
<div class="form-group form-group-wide">
<label for="sourceDesc">描述 (可选)</label>
<input type="text" id="sourceDesc" placeholder="数据源描述" autocomplete="off">
<input type="text" id="sourceDesc" placeholder="记录关于此数据源的备注信息" autocomplete="off">
</div>
<div class="form-group" id="serverSourceOption"
style="display: flex; align-items: flex-end; padding-bottom: 8px;">
<label
style="display: flex; align-items: center; gap: 8px; cursor: pointer; font-size: 0.85rem; color: var(--text-secondary); white-space: nowrap;">
<input type="checkbox" id="isServerSource" checked
style="width: 16px; height: 16px; accent-color: var(--accent-indigo);">
<span>用于服务器展示</span>
</div>
<div class="form-row" id="serverSourceOption" style="margin-top: 4px;">
<div class="form-group form-group-wide">
<div class="source-options-clean-row">
<label class="source-option-item" title="将此数据源的服务器指标聚合到首页总览中">
<div class="switch-wrapper">
<input type="checkbox" id="isOverviewSource" checked class="switch-input">
<div class="switch-label"></div>
</div>
<span class="source-option-label">加入总览统计</span>
</label>
<label class="source-option-item" title="在服务器详情列表中显示此数据源的服务器">
<div class="switch-wrapper">
<input type="checkbox" id="isDetailSource" checked class="switch-input">
<div class="switch-label"></div>
</div>
<span class="source-option-label">加入详情展示</span>
</label>
</div>
<input type="checkbox" id="isServerSource" checked disabled style="display: none;">
</div>
</div>
<div class="form-row" style="margin-top: 8px;">
<div class="form-actions">
<button class="btn btn-test" id="btnTest">测试连接</button>
<button class="btn btn-add" id="btnAdd">添加</button>
@@ -434,16 +538,37 @@
<input type="text" id="siteTitleInput" placeholder="例:数据可视化展示大屏">
</div>
<div class="form-group" style="margin-top: 15px;">
<label for="logoUrlInput">Logo URL (图片链接,为空则显示默认图标)</label>
<input type="url" id="logoUrlInput" placeholder="https://example.com/logo.png">
<label for="showPageNameInput">是否显示左上角标题</label>
<select id="showPageNameInput"
style="padding: 10px 14px; background: var(--bg-input); border: 1px solid var(--border-color); border-radius: var(--radius-sm); color: var(--text-primary); width: 100%;">
<option value="1">显示 (Show)</option>
<option value="0">隐藏 (Hide)</option>
</select>
</div>
<div class="form-group" style="margin-top: 15px;">
<label for="defaultThemeInput">默认主题</label>
<label for="logoUrlInput">Logo URL (白天/默认,支持图片链接)</label>
<input type="url" id="logoUrlInput" placeholder="https://example.com/logo_light.png">
</div>
<div class="form-group" style="margin-top: 15px;">
<label for="logoUrlDarkInput">Logo URL (黑夜模式,可为空则使用默认)</label>
<input type="url" id="logoUrlDarkInput" placeholder="https://example.com/logo_dark.png">
</div>
<div class="form-group" style="margin-top: 15px;">
<label for="faviconUrlInput">Favicon URL (浏览器标签页图标)</label>
<input type="url" id="faviconUrlInput" placeholder="https://example.com/favicon.ico">
</div>
<div class="settings-section" style="margin-top: 25px; border-top: 1px solid var(--border-color); padding-top: 20px;">
<h4 style="font-size: 0.85rem; color: var(--accent-indigo); margin-bottom: 15px; text-transform: uppercase; letter-spacing: 0.5px;">界面外观 (Appearance)</h4>
<div class="form-group">
<label for="defaultThemeInput">色彩主题模式</label>
<select id="defaultThemeInput"
style="padding: 10px 14px; background: var(--bg-input); border: 1px solid var(--border-color); border-radius: var(--radius-sm); color: var(--text-primary);">
<option value="dark">默认夜间模式</option>
<option value="light">默认白天模式</option>
style="padding: 10px 14px; background: var(--bg-input); border: 1px solid var(--border-color); border-radius: var(--radius-sm); color: var(--text-primary); width: 100%;">
<option value="auto">跟随系统主题 (Sync with OS)</option>
<option value="dark">强制深色模式 (Always Dark)</option>
<option value="light">强制浅色模式 (Always Light)</option>
</select>
<p style="font-size: 0.72rem; color: var(--text-muted); margin-top: 6px;">选择“跟随系统”后,应用将自动同步您操作系统或浏览器的黑暗/白天模式设置。</p>
</div>
</div>
<div class="form-group" style="margin-top: 15px;">
<label for="show95BandwidthInput">24h趋势图默认显示 95计费线</label>
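The hunk above introduces an `auto` theme mode that follows the OS color scheme. A minimal sketch of how such a setting can be resolved, assuming a hypothetical `resolveTheme` helper (the actual logic lives in the suppressed `public/js/app.js` diff and may differ):

```javascript
// Resolve the <select id="defaultThemeInput"> value to a concrete theme.
// setting: 'auto' | 'dark' | 'light'
// osPrefersDark: matchMedia('(prefers-color-scheme: dark)').matches in a browser
function resolveTheme(setting, osPrefersDark) {
  if (setting === 'auto') return osPrefersDark ? 'dark' : 'light';
  return setting === 'light' ? 'light' : 'dark'; // unknown values fall back to dark
}

// Browser wiring would typically look like this (shown as comments,
// since matchMedia is not available outside the browser):
// const mq = window.matchMedia('(prefers-color-scheme: dark)');
// document.documentElement.dataset.theme = resolveTheme(saved, mq.matches);
// mq.addEventListener('change', e => {
//   document.documentElement.dataset.theme = resolveTheme(saved, e.matches);
// });
```

The `change` listener is what makes "跟随系统" live: flipping the OS dark-mode toggle re-resolves the theme without a reload.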
@@ -460,8 +585,22 @@
<option value="tx">仅统计上行 (TX)</option>
<option value="rx">仅统计下行 (RX)</option>
<option value="both">统计上行+下行 (Sum)</option>
<option value="max">出入取大 (Max)</option>
</select>
</div>
<div class="form-group" style="margin-top: 15px;">
<label for="psFilingInput">公安备案号 (如:京公网安备 11010102000001号)</label>
<input type="text" id="psFilingInput" placeholder="请输入公安备案号">
</div>
<div class="form-group" style="margin-top: 15px;">
<label for="icpFilingInput">ICP 备案号 (如:京ICP备12345678号)</label>
<input type="text" id="icpFilingInput" placeholder="请输入 ICP 备案号">
</div>
<div class="form-group" style="margin-top: 15px;">
<label for="cdnUrlInput">静态资源 CDN 地址 (例如: https://cdn.example.com)</label>
<input type="url" id="cdnUrlInput" placeholder="留空则使用本地服务器资源">
<p style="font-size: 0.72rem; color: var(--text-muted); margin-top: 6px;">开启后,页面中的 JS/CSS/图片等资源将尝试从该 CDN 加载。请确保 CDN 已正确镜像相关资源。</p>
</div>
<div class="form-actions" style="margin-top: 25px; display: flex; justify-content: flex-end;">
<button class="btn btn-add" id="btnSaveSiteSettings">保存基础设置</button>
</div>
@@ -469,6 +608,65 @@
</div>
</div>

<!-- Security Settings Tab -->
<div class="tab-content" id="tab-security">
<div class="security-settings-form">
<h3>安全与隐私设置</h3>
<div class="form-group" style="margin-top: 15px;">
<label for="requireLoginForServerDetailsInput">服务器详情是否仅登录后可查看</label>
<select id="requireLoginForServerDetailsInput"
style="padding: 10px 14px; background: var(--bg-input); border: 1px solid var(--border-color); border-radius: var(--radius-sm); color: var(--text-primary); width: 100%;">
<option value="1">仅登录后可查看</option>
<option value="0">允许公开查看</option>
</select>
<p style="font-size: 0.72rem; color: var(--text-muted); margin-top: 6px;">开启后,未登录访客仍可看到大屏总览,但点击单台服务器时需要先登录。</p>
</div>
<div class="form-group" style="margin-top: 15px;">
<label for="showServerIpInput">是否在服务器详情中显示公网 IP</label>
<select id="showServerIpInput"
style="padding: 10px 14px; background: var(--bg-input); border: 1px solid var(--border-color); border-radius: var(--radius-sm); color: var(--text-primary); width: 100%;">
<option value="1">显示 (Show)</option>
<option value="0">隐藏 (Hide)</option>
</select>
<p style="font-size: 0.72rem; color: var(--text-muted); margin-top: 6px;">开启后,点击服务器详情时会显示该服务器的公网 IP 地址。</p>
</div>
<div class="form-group" style="margin-top: 15px;">
<label for="ipMetricNameInput">自定义 IP 采集指标 (可选)</label>
<input type="text" id="ipMetricNameInput" placeholder="例:node_network_address_info">
<p style="font-size: 0.72rem; color: var(--text-muted); margin-top: 6px;">如果您的 Prometheus 中有专门记录 IP 的指标,请在此输入。留空则尝试自动发现。</p>
</div>
<div class="form-group" style="margin-top: 15px;">
<label for="ipLabelNameInput">IP 指标中的 Label 名称</label>
<input type="text" id="ipLabelNameInput" placeholder="默认:address">
</div>
<div class="form-actions" style="margin-top: 25px; display: flex; justify-content: flex-end;">
<button class="btn btn-add" id="btnSaveSecuritySettings">保存安全设置</button>
</div>
<div class="form-message" id="securitySettingsMessage"></div>
</div>
</div>

<!-- Custom Detail Metrics Tab -->
<div class="tab-content" id="tab-details-metrics">
<div class="metrics-settings-form">
<div style="display: flex; justify-content: space-between; align-items: center; margin-bottom: 20px;">
<h3 style="margin: 0;">服务器详情指标配置</h3>
<button class="btn btn-add" id="btnAddCustomMetric" style="padding: 6px 12px; font-size: 0.8rem;">
<i class="fas fa-plus"></i> 添加指标
</button>
</div>

<div id="customMetricsList" class="custom-metrics-list" style="max-height: 400px; overflow-y: auto; padding-right: 5px;">
<!-- Dynamic rows will be added here -->
</div>

<div class="form-actions" style="margin-top: 25px; display: flex; justify-content: flex-end;">
<button class="btn btn-add" id="btnSaveCustomMetrics">保存指标配置</button>
</div>
<div class="form-message" id="customMetricsMessage"></div>
</div>
</div>

<!-- Latency Routes Tab -->
<div class="tab-content" id="tab-latency">
<div class="latency-settings-form">
@@ -479,11 +677,12 @@
style="background: rgba(255,255,255,0.02); padding: 15px; border-radius: 8px; margin-bottom: 20px; border: 1px solid var(--border-color);">
<div class="form-row">
<div class="form-group" style="flex: 1.5;">
<label>数据源 (Blackbox)</label>
<label>探测用服务器</label>
<select id="routeSourceSelect"
style="padding: 10px 14px; background: var(--bg-input); border: 1px solid var(--border-color); border-radius: var(--radius-sm); color: var(--text-primary);">
<option value="">-- 选择数据源 --</option>
</select>

</div>
<div class="form-group">
<label>起航点</label>
@@ -4,9 +4,6 @@
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>系统初始化 - 数据可视化展示大屏</title>
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link href="https://fonts.googleapis.com/css2?family=Inter:wght@300;400;500;600;700;800;900&family=JetBrains+Mono:wght@400;500;600&display=swap" rel="stylesheet">
<link rel="stylesheet" href="/css/style.css">
<style>
body {
@@ -70,6 +67,33 @@
justify-content: center;
padding: 10px 0;
}

@media (max-width: 480px) {
body {
align-items: flex-start;
padding: 16px 12px;
}
.init-container {
padding: 24px 18px;
border-radius: 10px;
max-width: 100%;
}
.init-header h2 {
font-size: 18px;
}
.init-header p {
font-size: 12px;
}
.form-row {
flex-direction: column;
}
.actions {
flex-direction: column;
}
.actions .btn {
width: 100%;
}
}
</style>
</head>
<body>
1072 public/js/app.js
File diff suppressed because it is too large
@@ -18,13 +18,91 @@ class AreaChart {

this.prevMaxVal = 0;
this.currentMaxVal = 0;
this.lastDataHash = ''; // Fingerprint for optimization

// Use debounced resize for performance and safety
this._resize = typeof debounce === 'function' ? debounce(this.resize.bind(this), 100) : this.resize.bind(this);
window.addEventListener('resize', this._resize);

// Drag zoom support
this.isDraggingP95 = false;
this.customMaxVal = null;

this.onPointerDown = this.onPointerDown.bind(this);
this.onPointerMove = this.onPointerMove.bind(this);
this.onPointerUp = this.onPointerUp.bind(this);

this.canvas.addEventListener('pointerdown', this.onPointerDown);
window.addEventListener('pointermove', this.onPointerMove);
window.addEventListener('pointerup', this.onPointerUp);

this.resize();
}

onPointerDown(e) {
if (!this.showP95 || !this.p95) return;
const rect = this.canvas.getBoundingClientRect();
const scaleY = this.height / rect.height;
const y = (e.clientY - rect.top) * scaleY;

const p = this.padding;
const chartH = this.height - p.top - p.bottom;

// Calculate current P95 Y position
const k = 1024;
const currentMaxVal = (this.customMaxVal !== null ? this.customMaxVal : (this.currentMaxVal || 1024));
let unitIdx = Math.floor(Math.log(Math.max(1, currentMaxVal)) / Math.log(k));
unitIdx = Math.max(0, Math.min(unitIdx, 4));
const unitFactor = Math.pow(k, unitIdx);
const rawValInUnit = (currentMaxVal * 1.15) / unitFactor;
let niceMaxInUnit;
if (rawValInUnit <= 1) niceMaxInUnit = 1;
else if (rawValInUnit <= 2) niceMaxInUnit = 2;
else if (rawValInUnit <= 5) niceMaxInUnit = 5;
else if (rawValInUnit <= 10) niceMaxInUnit = 10;
else if (rawValInUnit <= 20) niceMaxInUnit = 20;
else if (rawValInUnit <= 50) niceMaxInUnit = 50;
else if (rawValInUnit <= 100) niceMaxInUnit = 100;
else if (rawValInUnit <= 200) niceMaxInUnit = 200;
else if (rawValInUnit <= 500) niceMaxInUnit = 500;
else if (rawValInUnit <= 1000) niceMaxInUnit = 1000;
else niceMaxInUnit = Math.ceil(rawValInUnit / 100) * 100;

const displayMaxVal = this.customMaxVal !== null ? this.customMaxVal : (niceMaxInUnit * unitFactor);
const p95Y = p.top + chartH - (this.p95 / (displayMaxVal || 1)) * chartH;

if (Math.abs(y - p95Y) < 25) {
this.isDraggingP95 = true;
this.canvas.style.cursor = 'ns-resize';
this.canvas.setPointerCapture(e.pointerId);
e.preventDefault();
e.stopPropagation();
}
}

onPointerMove(e) {
if (!this.isDraggingP95) return;
const rect = this.canvas.getBoundingClientRect();
const scaleY = this.height / rect.height;
const y = (e.clientY - rect.top) * scaleY;
const p = this.padding;
const chartH = this.height - p.top - p.bottom;

const dy = p.top + chartH - y;
if (dy > 10) {
this.customMaxVal = (this.p95 * chartH) / dy;
this.draw();
}
}

onPointerUp(e) {
if (this.isDraggingP95) {
this.isDraggingP95 = false;
this.canvas.style.cursor = '';
this.canvas.releasePointerCapture(e.pointerId);
}
}

resize() {
const rect = this.canvas.parentElement.getBoundingClientRect();
this.width = rect.width;
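The drag-zoom hunk above keeps the P95 guide line under the pointer by inverting the projection: since the line is drawn at `(p95 / maxVal) * chartH` pixels above the baseline, holding it at pointer offset `dy` means solving for the axis maximum. A standalone sketch of that inversion (illustrative re-derivation, not a copy of the class code):

```javascript
// Drag-to-rescale math from onPointerMove:
//   (p95 / maxVal) * chartH = dy   =>   maxVal = p95 * chartH / dy
function rescaledMax(p95, chartH, dy) {
  if (dy <= 10) return null; // same guard as the hunk: ignore tiny/negative offsets
  return (p95 * chartH) / dy;
}
```

Dragging the line downward shrinks `dy`, which grows `maxVal` and visually "zooms out" the Y axis; the `dy > 10` guard keeps the scale from exploding near the baseline.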
@@ -40,6 +118,14 @@ class AreaChart {
setData(data) {
if (!data || !data.timestamps) return;

// 1. Data Fingerprinting: Skip redundant updates to save GPU/CPU
const fingerprint = data.timestamps.length + '_' +
(data.rx.length > 0 ? data.rx[data.rx.length - 1] : 0) + '_' +
(data.tx.length > 0 ? data.tx[data.tx.length - 1] : 0);

if (fingerprint === this.lastDataHash) return;
this.lastDataHash = fingerprint;

// Store old data for smooth transition before updating this.data
// Only clone if there is data to clone; otherwise use empty set
if (this.data && this.data.timestamps && this.data.timestamps.length > 0) {
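The fingerprint in the hunk above is deliberately cheap: sample count plus the last rx/tx values. Two successive polls with identical tails skip the canvas redraw entirely. Extracted as a standalone helper for illustration:

```javascript
// Cheap data fingerprint: length + last rx + last tx.
// Trade-off: a change in a middle sample with an unchanged tail would be
// missed, which is acceptable for append-only time-series polling.
function fingerprintOf(data) {
  const last = (arr) => (arr.length > 0 ? arr[arr.length - 1] : 0);
  return data.timestamps.length + '_' + last(data.rx) + '_' + last(data.tx);
}

const a = { timestamps: [1, 2], rx: [10, 20], tx: [5, 6] };
const b = { timestamps: [1, 2], rx: [10, 20], tx: [5, 6] };
// identical tails => identical fingerprint => redraw skipped
```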
@@ -55,8 +141,8 @@ class AreaChart {
// Smoothly transition max value context too
this.prevMaxVal = this.currentMaxVal || 0;

// Downsample if data is too dense (target ~1500 points for performance)
const MAX_POINTS = 1500;
// Downsample if data is too dense (target ~500 points for GPU performance)
const MAX_POINTS = 500;
if (data.timestamps.length > MAX_POINTS) {
const skip = Math.ceil(data.timestamps.length / MAX_POINTS);
const downsampled = { timestamps: [], rx: [], tx: [] };
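The loop body of the downsampling shown above is outside the hunk; a simple stride strategy consistent with the `skip` computed there would look like this (an illustrative guess, assuming the repo keeps every `skip`-th sample rather than averaging buckets):

```javascript
// Stride-based downsampling: keep every `skip`-th sample so at most
// ~maxPoints points reach the canvas.
function downsample(data, maxPoints) {
  if (data.timestamps.length <= maxPoints) return data;
  const skip = Math.ceil(data.timestamps.length / maxPoints);
  const out = { timestamps: [], rx: [], tx: [] };
  for (let i = 0; i < data.timestamps.length; i += skip) {
    out.timestamps.push(data.timestamps[i]);
    out.rx.push(data.rx[i]);
    out.tx.push(data.tx[i]);
  }
  return out;
}
```

Note that stride sampling can drop short traffic spikes between kept samples; bucket-max downsampling would preserve them at slightly more cost.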
@@ -84,6 +170,8 @@ class AreaChart {
combined = data.tx.map(t => t || 0);
} else if (this.p95Type === 'rx') {
combined = data.rx.map(r => r || 0);
} else if (this.p95Type === 'max') {
combined = data.tx.map((t, i) => Math.max(t || 0, data.rx[i] || 0));
} else {
combined = data.tx.map((t, i) => (t || 0) + (data.rx[i] || 0));
}
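The hunk above only builds the `combined` series for the new `max` billing mode (per-sample maximum of tx and rx); the percentile step itself falls outside the hunk and is not shown in this diff. A common way to take the 95th-percentile value for burstable billing, sketched here as an assumption about what follows:

```javascript
// 95th percentile by sort-and-index; how the repo actually computes it
// is not visible in this hunk.
function p95Of(values) {
  if (values.length === 0) return 0;
  const sorted = [...values].sort((a, b) => a - b);
  const idx = Math.min(sorted.length - 1, Math.floor(sorted.length * 0.95));
  return sorted[idx];
}
```

With 24h of 5-minute samples (288 points), this discards the ~14 highest samples, which is exactly the "top 5% of bursts are free" property of 95th-percentile billing.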
@@ -103,7 +191,7 @@ class AreaChart {
animate() {
if (this.animFrame) cancelAnimationFrame(this.animFrame);
const start = performance.now();
const duration = 800;
const duration = 400; // Shorter animation = less GPU time

const step = (now) => {
const elapsed = now - start;
@@ -153,13 +241,10 @@ class AreaChart {
let unitIdx = Math.floor(Math.log(Math.max(1, maxDataVal)) / Math.log(k));
unitIdx = Math.max(0, Math.min(unitIdx, sizes.length - 1));
const unitFactor = Math.pow(k, unitIdx);
const unitLabel = sizes[unitIdx];

// Get value in current units and find a "nice" round max
// Use 1.15 cushion
const rawValInUnit = (maxDataVal * 1.15) / unitFactor;
let niceMaxInUnit;

if (rawValInUnit <= 1) niceMaxInUnit = 1;
else if (rawValInUnit <= 2) niceMaxInUnit = 2;
else if (rawValInUnit <= 5) niceMaxInUnit = 5;
@@ -172,7 +257,16 @@ class AreaChart {
else if (rawValInUnit <= 1000) niceMaxInUnit = 1000;
else niceMaxInUnit = Math.ceil(rawValInUnit / 100) * 100;

const maxVal = niceMaxInUnit * unitFactor;
let maxVal = niceMaxInUnit * unitFactor;
if (this.customMaxVal !== null) {
maxVal = this.customMaxVal;
}

// Recalculate units based on final maxVal (could be zoomed)
let finalUnitIdx = Math.floor(Math.log(Math.max(1, maxVal)) / Math.log(k));
finalUnitIdx = Math.max(0, Math.min(finalUnitIdx, sizes.length - 1));
const finalFactor = Math.pow(k, finalUnitIdx);
const finalUnitLabel = sizes[finalUnitIdx];

const len = timestamps.length;
const xStep = chartW / (len - 1);
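The axis logic in the hunks above can be condensed into one function: pick a binary unit (k = 1024), add a 15% cushion over the data maximum, then snap to a 1/2/5-style round number in that unit. A standalone re-derivation of the inline if/else ladder for clarity (not a copy of the class code):

```javascript
// "Nice max" for a bytes-per-second Y axis.
function niceAxisMax(maxDataVal, k = 1024) {
  const steps = [1, 2, 5, 10, 20, 50, 100, 200, 500, 1000];
  let unitIdx = Math.floor(Math.log(Math.max(1, maxDataVal)) / Math.log(k));
  unitIdx = Math.max(0, Math.min(unitIdx, 4)); // clamp to B..TB per second
  const unitFactor = Math.pow(k, unitIdx);
  const raw = (maxDataVal * 1.15) / unitFactor;  // 15% headroom above the peak
  const nice = steps.find(s => raw <= s) ?? Math.ceil(raw / 100) * 100;
  return nice * unitFactor;
}
```

For example, a 3000 B/s peak lands in the KB unit (raw ≈ 3.37 KB) and snaps up to 5 KB, i.e. 5120 B/s, so the topmost gridline always carries a round label.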
@@ -196,14 +290,14 @@
ctx.lineTo(p.left + chartW, y);
ctx.stroke();

// Y-axis labels - share the same unit for readability
const valInUnit = niceMaxInUnit * (1 - i / gridLines);
// Y-axis labels
const v = maxVal * (1 - i / gridLines);
const valInUnit = v / finalFactor;
ctx.fillStyle = '#5a6380';
ctx.font = '10px "JetBrains Mono", monospace';
ctx.textAlign = 'right';

// Format: "X.X MB/s" or "X MB/s"
const label = (valInUnit % 1 === 0 ? valInUnit : valInUnit.toFixed(1)) + ' ' + unitLabel;
const label = (valInUnit % 1 === 0 ? valInUnit : valInUnit.toFixed(1)) + ' ' + finalUnitLabel;
ctx.fillText(label, p.left - 10, y + 3);
}
@@ -216,47 +310,42 @@ class AreaChart {
const x = getX(i);
ctx.fillText(formatTime(timestamps[i]), x, h - 8);
}
// Always show last label
ctx.fillText(formatTime(timestamps[len - 1]), getX(len - 1), h - 8);

const getPVal = (arr, i) => (arr && i < arr.length) ? arr[i] : 0;
// Draw data areas with clipping
ctx.save();
ctx.beginPath();
ctx.rect(p.left, p.top, chartW, chartH);
ctx.clip();

// Draw TX area
if (this.showTx) {
this.drawArea(ctx, tx, this.prevData ? this.prevData.tx : null, getX, getY, chartH, p,
'rgba(99, 102, 241, 0.25)', 'rgba(99, 102, 241, 0.02)',
'#6366f1', len);
'rgba(99, 102, 241, 0.25)', 'rgba(99, 102, 241, 0.02)', '#6366f1', len);
}

// Draw RX area (on top)
if (this.showRx) {
this.drawArea(ctx, rx, this.prevData ? this.prevData.rx : null, getX, getY, chartH, p,
'rgba(6, 182, 212, 0.25)', 'rgba(6, 182, 212, 0.02)',
'#06b6d4', len);
'rgba(6, 182, 212, 0.25)', 'rgba(6, 182, 212, 0.02)', '#06b6d4', len);
}
ctx.restore();

// Draw P95 line
if (this.showP95 && this.p95 && this.animProgress === 1) {
if (this.showP95 && this.p95 && (this.animProgress === 1 || this.isDraggingP95)) {
const p95Y = getY(this.p95);
// Only draw if within visible range
if (p95Y >= p.top && p95Y <= p.top + chartH) {
ctx.save();
ctx.beginPath();
ctx.setLineDash([6, 4]);
ctx.strokeStyle = 'rgba(244, 63, 94, 0.85)'; // --accent-rose
ctx.strokeStyle = 'rgba(244, 63, 94, 0.85)';
ctx.lineWidth = 1.5;
ctx.moveTo(p.left, p95Y);
ctx.lineTo(p.left + chartW, p95Y);
ctx.stroke();

// P95 label background
const label = '95计费: ' + (window.formatBandwidth ? window.formatBandwidth(this.p95) : this.p95.toFixed(2));
ctx.font = 'bold 11px "JetBrains Mono", monospace';
const metrics = ctx.measureText(label);
ctx.fillStyle = 'rgba(244, 63, 94, 0.15)';
ctx.fillRect(p.left + 8, p95Y - 20, metrics.width + 12, 18);

// P95 label text
ctx.fillStyle = '#f43f5e';
ctx.textAlign = 'left';
ctx.fillText(label, p.left + 14, p95Y - 7);
@@ -268,7 +357,7 @@ class AreaChart {
drawArea(ctx, values, prevValues, getX, getY, chartH, p, fillColorTop, fillColorBottom, strokeColor, len) {
if (!values || values.length === 0) return;

const useSimple = len > 250;
const useSimple = len > 80;
const getPVal = (i) => (prevValues && i < prevValues.length) ? prevValues[i] : 0;

// Fill
@@ -330,11 +419,12 @@ class MetricChart {
this.data = { timestamps: [], values: [], series: null };
this.unit = unit; // '%', 'B/s', etc.
this.dpr = window.devicePixelRatio || 1;
this.padding = { top: 10, right: 10, bottom: 20, left: 60 };
this.padding = { top: 10, right: 10, bottom: 35, left: 60 };
this.animProgress = 0;

this.prevMaxVal = 0;
this.currentMaxVal = 0;
this.lastDataHash = ''; // Fingerprint for optimization

// Use debounced resize for performance and safety
this._resize = typeof debounce === 'function' ? debounce(this.resize.bind(this), 100) : this.resize.bind(this);
@@ -358,6 +448,15 @@ class MetricChart {
}

setData(data) {
if (!data || !data.timestamps) return;

// 1. Simple fingerprinting to avoid constant re-animation of same data
const lastVal = data.values && data.values.length > 0 ? data.values[data.values.length - 1] : 0;
const fingerprint = data.timestamps.length + '_' + lastVal + '_' + (data.series ? 's' : 'v');

if (fingerprint === this.lastDataHash) return;
this.lastDataHash = fingerprint;

if (this.data && this.data.values && this.data.values.length > 0) {
this.prevData = JSON.parse(JSON.stringify(this.data));
} else {
@@ -388,7 +487,7 @@ class MetricChart {
animate() {
if (this.animFrame) cancelAnimationFrame(this.animFrame);
const start = performance.now();
const duration = 500;
const duration = 300; // Snappier and lighter on GPU
const step = (now) => {
const elapsed = now - start;
this.animProgress = Math.min(elapsed / duration, 1);
@@ -456,12 +555,30 @@ class MetricChart {
} else {
label = v.toFixed(0) + this.unit;
}
} else if (this.unit === '%' && this.totalValue) {
// When a total is provided, convert the percentage to an absolute value
// for display (e.g. show memory as 2GB instead of 25%)
const absVal = v * (this.totalValue / 100);
label = window.formatBytes ? window.formatBytes(absVal) : absVal.toFixed(0);
} else {
label = (v >= 1000 ? (v / 1000).toFixed(1) + 'k' : v.toFixed(v < 10 && v > 0 ? 1 : 0)) + this.unit;
}
ctx.fillText(label, p.left - 8, y + 3);
}

// X-axis Timeline
ctx.fillStyle = '#5a6380';
ctx.font = '9px "JetBrains Mono", monospace';
ctx.textAlign = 'center';
const labelInterval = Math.max(1, Math.floor(len / 5));
for (let i = 0; i < len; i += labelInterval) {
const x = getX(i);
ctx.fillText(formatTime(timestamps[i]), x, h - 8);
}
// Always show last label if not already shown
if ((len - 1) % labelInterval !== 0) {
ctx.fillText(formatTime(timestamps[len - 1]), getX(len - 1), h - 8);
}

if (series) {
// Draw Stacked Area
const modes = [
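The new label rule above converts percentage ticks to absolute sizes when the chart knows the underlying total (memory shown as 2GB instead of 25%). A self-contained sketch of that rule; `formatBytes` here is a simplified stand-in for the app's `window.formatBytes`, whose exact output format is not shown in this diff:

```javascript
// Simplified stand-in for window.formatBytes (assumed helper).
function formatBytes(n) {
  const units = ['B', 'KB', 'MB', 'GB', 'TB'];
  let i = 0;
  while (n >= 1024 && i < units.length - 1) { n /= 1024; i++; }
  return (Number.isInteger(n) ? n : n.toFixed(1)) + units[i];
}

// Axis-label rule from the hunk: percent + known total => absolute value.
function axisLabel(v, unit, totalValue) {
  if (unit === '%' && totalValue) {
    return formatBytes(v * (totalValue / 100)); // e.g. 25% of 8GB -> "2GB"
  }
  return v.toFixed(0) + unit;
}
```

When no total is available the label falls back to plain percent, so the same chart class serves both CPU (%) and memory (bytes) panels.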
@@ -527,7 +644,7 @@ class MetricChart {
});

} else {
const useSimple = len > 250;
const useSimple = len > 100;
const prevVals = this.prevData ? this.prevData.values : null;
const getPVal = (i) => (prevVals && i < prevVals.length) ? prevVals[i] : 0;
45 public/vendor/echarts.min.js (vendored, new file)
File diff suppressed because one or more lines are too long
1 public/vendor/world.json (vendored, new file)
File diff suppressed because one or more lines are too long
@@ -1,186 +0,0 @@
/**
* Database Integrity Check
* Runs at startup to ensure all required tables exist.
* Recreates the database if any tables are missing.
*/
require('dotenv').config();
const mysql = require('mysql2/promise');
const db = require('./db');
const path = require('path');
const fs = require('fs');

const REQUIRED_TABLES = [
'users',
'prometheus_sources',
'site_settings',
'traffic_stats',
'server_locations',
'latency_routes'
];

async function checkAndFixDatabase() {
const envPath = path.join(__dirname, '..', '.env');
if (!fs.existsSync(envPath)) return;

try {
// Check tables
const [rows] = await db.query("SHOW TABLES");
const existingTables = rows.map(r => Object.values(r)[0]);

const missingTables = REQUIRED_TABLES.filter(t => !existingTables.includes(t));

if (missingTables.length > 0) {
console.log(`[Database Integrity] ⚠️ Missing tables: ${missingTables.join(', ')}. Creating them...`);

for (const table of missingTables) {
await createTable(table);
}
console.log(`[Database Integrity] ✅ Missing tables created.`);
}

// Check for is_server_source and type in prometheus_sources
const [promColumns] = await db.query("SHOW COLUMNS FROM prometheus_sources");
const promColumnNames = promColumns.map(c => c.Field);

if (!promColumnNames.includes('is_server_source')) {
console.log(`[Database Integrity] ⚠️ Missing column 'is_server_source' in 'prometheus_sources'. Adding it...`);
await db.query("ALTER TABLE prometheus_sources ADD COLUMN is_server_source TINYINT(1) DEFAULT 1 AFTER description");
console.log(`[Database Integrity] ✅ Column 'is_server_source' added.`);
}

if (!promColumnNames.includes('type')) {
console.log(`[Database Integrity] ⚠️ Missing column 'type' in 'prometheus_sources'. Adding it...`);
await db.query("ALTER TABLE prometheus_sources ADD COLUMN type VARCHAR(50) DEFAULT 'prometheus' AFTER is_server_source");
console.log(`[Database Integrity] ✅ Column 'type' added.`);
}

// Check for new columns in site_settings
const [columns] = await db.query("SHOW COLUMNS FROM site_settings");
const columnNames = columns.map(c => c.Field);
if (!columnNames.includes('show_95_bandwidth')) {
console.log(`[Database Integrity] ⚠️ Missing column 'show_95_bandwidth' in 'site_settings'. Adding it...`);
await db.query("ALTER TABLE site_settings ADD COLUMN show_95_bandwidth TINYINT(1) DEFAULT 0 AFTER default_theme");
console.log(`[Database Integrity] ✅ Column 'show_95_bandwidth' added.`);
}
if (!columnNames.includes('p95_type')) {
console.log(`[Database Integrity] ⚠️ Missing column 'p95_type' in 'site_settings'. Adding it...`);
await db.query("ALTER TABLE site_settings ADD COLUMN p95_type VARCHAR(20) DEFAULT 'tx' AFTER show_95_bandwidth");
console.log(`[Database Integrity] ✅ Column 'p95_type' added.`);
}
if (!columnNames.includes('blackbox_source_id')) {
console.log(`[Database Integrity] ⚠️ Missing column 'blackbox_source_id' in 'site_settings'. Adding it...`);
await db.query("ALTER TABLE site_settings ADD COLUMN blackbox_source_id INT AFTER p95_type");
console.log(`[Database Integrity] ✅ Column 'blackbox_source_id' added.`);
}
if (!columnNames.includes('latency_source')) {
console.log(`[Database Integrity] ⚠️ Missing column 'latency_source' in 'site_settings'. Adding it...`);
await db.query("ALTER TABLE site_settings ADD COLUMN latency_source VARCHAR(100) AFTER blackbox_source_id");
console.log(`[Database Integrity] ✅ Column 'latency_source' added.`);
}
if (!columnNames.includes('latency_dest')) {
console.log(`[Database Integrity] ⚠️ Missing column 'latency_dest' in 'site_settings'. Adding it...`);
await db.query("ALTER TABLE site_settings ADD COLUMN latency_dest VARCHAR(100) AFTER latency_source");
console.log(`[Database Integrity] ✅ Column 'latency_dest' added.`);
}
if (!columnNames.includes('latency_target')) {
console.log(`[Database Integrity] ⚠️ Missing column 'latency_target' in 'site_settings'. Adding it...`);
await db.query("ALTER TABLE site_settings ADD COLUMN latency_target VARCHAR(255) AFTER latency_dest");
console.log(`[Database Integrity] ✅ Column 'latency_target' added.`);
}
} catch (err) {
console.error('[Database Integrity] ❌ Error checking integrity:', err.message);
}
}
async function createTable(tableName) {
|
||||
console.log(` - Creating table "${tableName}"...`);
|
||||
switch (tableName) {
|
||||
case 'users':
|
||||
await db.query(`
|
||||
CREATE TABLE IF NOT EXISTS users (
|
||||
id INT AUTO_INCREMENT PRIMARY KEY,
|
||||
username VARCHAR(255) NOT NULL UNIQUE,
|
||||
password VARCHAR(255) NOT NULL,
|
||||
salt VARCHAR(255) NOT NULL,
|
||||
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
|
||||
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
|
||||
`);
|
||||
break;
|
||||
case 'prometheus_sources':
|
||||
await db.query(`
|
||||
CREATE TABLE IF NOT EXISTS prometheus_sources (
|
||||
id INT AUTO_INCREMENT PRIMARY KEY,
|
||||
name VARCHAR(255) NOT NULL,
|
||||
url VARCHAR(500) NOT NULL,
|
||||
description TEXT,
|
||||
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
|
||||
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
|
||||
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
|
||||
`);
|
||||
break;
|
||||
case 'site_settings':
|
||||
await db.query(`
|
||||
CREATE TABLE IF NOT EXISTS site_settings (
|
||||
id INT PRIMARY KEY DEFAULT 1,
|
||||
page_name VARCHAR(255) DEFAULT '数据可视化展示大屏',
|
||||
title VARCHAR(255) DEFAULT '数据可视化展示大屏',
|
||||
logo_url TEXT,
|
||||
default_theme VARCHAR(20) DEFAULT 'dark',
|
||||
show_95_bandwidth TINYINT(1) DEFAULT 0,
|
||||
p95_type VARCHAR(20) DEFAULT 'tx',
|
||||
blackbox_source_id INT,
|
||||
latency_source VARCHAR(100),
|
||||
latency_dest VARCHAR(100),
|
||||
latency_target VARCHAR(255),
|
||||
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
|
||||
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
|
||||
`);
|
||||
await db.query(`
|
||||
INSERT IGNORE INTO site_settings (id, page_name, title, default_theme, show_95_bandwidth)
|
||||
VALUES (1, '数据可视化展示大屏', '数据可视化展示大屏', 'dark', 0)
|
||||
`);
|
||||
break;
|
||||
case 'traffic_stats':
|
||||
await db.query(`
|
||||
CREATE TABLE IF NOT EXISTS traffic_stats (
|
||||
id INT AUTO_INCREMENT PRIMARY KEY,
|
||||
rx_bytes BIGINT UNSIGNED DEFAULT 0,
|
||||
tx_bytes BIGINT UNSIGNED DEFAULT 0,
|
||||
rx_bandwidth DOUBLE DEFAULT 0,
|
||||
tx_bandwidth DOUBLE DEFAULT 0,
|
||||
timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
|
||||
UNIQUE INDEX (timestamp)
|
||||
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
|
||||
`);
|
||||
break;
|
||||
case 'latency_routes':
|
||||
await db.query(`
|
||||
CREATE TABLE IF NOT EXISTS latency_routes (
|
||||
id INT AUTO_INCREMENT PRIMARY KEY,
|
||||
source_id INT NOT NULL,
|
||||
latency_source VARCHAR(100) NOT NULL,
|
||||
latency_dest VARCHAR(100) NOT NULL,
|
||||
latency_target VARCHAR(255) NOT NULL,
|
||||
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
|
||||
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
|
||||
`);
|
||||
break;
|
||||
case 'server_locations':
|
||||
await db.query(`
|
||||
CREATE TABLE IF NOT EXISTS server_locations (
|
||||
id INT AUTO_INCREMENT PRIMARY KEY,
|
||||
ip VARCHAR(255) NOT NULL UNIQUE,
|
||||
country CHAR(2),
|
||||
country_name VARCHAR(100),
|
||||
region VARCHAR(100),
|
||||
city VARCHAR(100),
|
||||
latitude DOUBLE,
|
||||
longitude DOUBLE,
|
||||
last_updated TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
|
||||
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
|
||||
`);
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
module.exports = checkAndFixDatabase;
|
||||
236
server/db-schema-check.js
Normal file
@@ -0,0 +1,236 @@
/**
 * Database schema check
 * Ensures required tables and columns exist at startup.
 */
const path = require('path');
require('dotenv').config({ path: path.join(__dirname, '..', '.env') });
const db = require('./db');
const fs = require('fs');

const SCHEMA = {
  users: {
    createSql: `
      CREATE TABLE IF NOT EXISTS users (
        id INT AUTO_INCREMENT PRIMARY KEY,
        username VARCHAR(255) NOT NULL UNIQUE,
        password VARCHAR(255) NOT NULL,
        salt VARCHAR(255) NOT NULL,
        created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
      ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
    `,
    columns: [
      { name: 'username', sql: "ALTER TABLE users ADD COLUMN username VARCHAR(255) NOT NULL UNIQUE AFTER id" },
      { name: 'password', sql: "ALTER TABLE users ADD COLUMN password VARCHAR(255) NOT NULL AFTER username" },
      { name: 'salt', sql: "ALTER TABLE users ADD COLUMN salt VARCHAR(255) NOT NULL AFTER password" }
    ]
  },
  prometheus_sources: {
    createSql: `
      CREATE TABLE IF NOT EXISTS prometheus_sources (
        id INT AUTO_INCREMENT PRIMARY KEY,
        name VARCHAR(255) NOT NULL,
        url VARCHAR(500) NOT NULL,
        description TEXT,
        is_server_source TINYINT(1) DEFAULT 1,
        is_overview_source TINYINT(1) DEFAULT 1,
        is_detail_source TINYINT(1) DEFAULT 1,
        type VARCHAR(50) DEFAULT 'prometheus',
        created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
        updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
      ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
    `,
    columns: [
      { name: 'name', sql: "ALTER TABLE prometheus_sources ADD COLUMN name VARCHAR(255) NOT NULL AFTER id" },
      { name: 'url', sql: "ALTER TABLE prometheus_sources ADD COLUMN url VARCHAR(500) NOT NULL AFTER name" },
      { name: 'description', sql: "ALTER TABLE prometheus_sources ADD COLUMN description TEXT AFTER url" },
      { name: 'is_server_source', sql: "ALTER TABLE prometheus_sources ADD COLUMN is_server_source TINYINT(1) DEFAULT 1 AFTER description" },
      { name: 'is_overview_source', sql: "ALTER TABLE prometheus_sources ADD COLUMN is_overview_source TINYINT(1) DEFAULT 1 AFTER is_server_source" },
      { name: 'is_detail_source', sql: "ALTER TABLE prometheus_sources ADD COLUMN is_detail_source TINYINT(1) DEFAULT 1 AFTER is_overview_source" },
      { name: 'type', sql: "ALTER TABLE prometheus_sources ADD COLUMN type VARCHAR(50) DEFAULT 'prometheus' AFTER is_detail_source" }
    ]
  },
  site_settings: {
    createSql: `
      CREATE TABLE IF NOT EXISTS site_settings (
        id INT PRIMARY KEY DEFAULT 1,
        page_name VARCHAR(255) DEFAULT '数据可视化展示大屏',
        show_page_name TINYINT(1) DEFAULT 1,
        title VARCHAR(255) DEFAULT '数据可视化展示大屏',
        logo_url TEXT,
        logo_url_dark TEXT,
        favicon_url TEXT,
        default_theme VARCHAR(20) DEFAULT 'dark',
        show_95_bandwidth TINYINT(1) DEFAULT 0,
        p95_type VARCHAR(20) DEFAULT 'tx',
        require_login_for_server_details TINYINT(1) DEFAULT 1,
        blackbox_source_id INT,
        latency_source VARCHAR(100),
        latency_dest VARCHAR(100),
        latency_target VARCHAR(255),
        icp_filing VARCHAR(255),
        ps_filing VARCHAR(255),
        show_server_ip TINYINT(1) DEFAULT 0,
        ip_metric_name VARCHAR(100) DEFAULT NULL,
        ip_label_name VARCHAR(100) DEFAULT 'address',
        custom_metrics JSON DEFAULT NULL,
        cdn_url VARCHAR(500) DEFAULT NULL,
        updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
      ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
    `,
    seedSql: `
      INSERT IGNORE INTO site_settings (
        id, page_name, show_page_name, title, default_theme, show_95_bandwidth, p95_type, require_login_for_server_details
      ) VALUES (
        1, '数据可视化展示大屏', 1, '数据可视化展示大屏', 'dark', 0, 'tx', 1
      )
    `,
    columns: [
      { name: 'page_name', sql: "ALTER TABLE site_settings ADD COLUMN page_name VARCHAR(255) DEFAULT '数据可视化展示大屏' AFTER id" },
      { name: 'show_page_name', sql: "ALTER TABLE site_settings ADD COLUMN show_page_name TINYINT(1) DEFAULT 1 AFTER page_name" },
      { name: 'title', sql: "ALTER TABLE site_settings ADD COLUMN title VARCHAR(255) DEFAULT '数据可视化展示大屏' AFTER show_page_name" },
      { name: 'logo_url', sql: "ALTER TABLE site_settings ADD COLUMN logo_url TEXT AFTER title" },
      { name: 'logo_url_dark', sql: "ALTER TABLE site_settings ADD COLUMN logo_url_dark TEXT AFTER logo_url" },
      { name: 'favicon_url', sql: "ALTER TABLE site_settings ADD COLUMN favicon_url TEXT AFTER logo_url_dark" },
      { name: 'default_theme', sql: "ALTER TABLE site_settings ADD COLUMN default_theme VARCHAR(20) DEFAULT 'dark' AFTER favicon_url" },
      { name: 'show_95_bandwidth', sql: "ALTER TABLE site_settings ADD COLUMN show_95_bandwidth TINYINT(1) DEFAULT 0 AFTER default_theme" },
      { name: 'p95_type', sql: "ALTER TABLE site_settings ADD COLUMN p95_type VARCHAR(20) DEFAULT 'tx' AFTER show_95_bandwidth" },
      { name: 'require_login_for_server_details', sql: "ALTER TABLE site_settings ADD COLUMN require_login_for_server_details TINYINT(1) DEFAULT 1 AFTER p95_type" },
      { name: 'blackbox_source_id', sql: "ALTER TABLE site_settings ADD COLUMN blackbox_source_id INT AFTER require_login_for_server_details" },
      { name: 'latency_source', sql: "ALTER TABLE site_settings ADD COLUMN latency_source VARCHAR(100) AFTER blackbox_source_id" },
      { name: 'latency_dest', sql: "ALTER TABLE site_settings ADD COLUMN latency_dest VARCHAR(100) AFTER latency_source" },
      { name: 'latency_target', sql: "ALTER TABLE site_settings ADD COLUMN latency_target VARCHAR(255) AFTER latency_dest" },
      { name: 'icp_filing', sql: "ALTER TABLE site_settings ADD COLUMN icp_filing VARCHAR(255) AFTER latency_target" },
      { name: 'ps_filing', sql: "ALTER TABLE site_settings ADD COLUMN ps_filing VARCHAR(255) AFTER icp_filing" },
      { name: 'show_server_ip', sql: "ALTER TABLE site_settings ADD COLUMN show_server_ip TINYINT(1) DEFAULT 0 AFTER ps_filing" },
      { name: 'ip_metric_name', sql: "ALTER TABLE site_settings ADD COLUMN ip_metric_name VARCHAR(100) DEFAULT NULL AFTER show_server_ip" },
      { name: 'ip_label_name', sql: "ALTER TABLE site_settings ADD COLUMN ip_label_name VARCHAR(100) DEFAULT 'address' AFTER ip_metric_name" },
      { name: 'custom_metrics', sql: "ALTER TABLE site_settings ADD COLUMN custom_metrics JSON DEFAULT NULL AFTER ip_label_name" },
      { name: 'cdn_url', sql: "ALTER TABLE site_settings ADD COLUMN cdn_url VARCHAR(500) DEFAULT NULL AFTER custom_metrics" }
    ]
  },
  traffic_stats: {
    createSql: `
      CREATE TABLE IF NOT EXISTS traffic_stats (
        id INT AUTO_INCREMENT PRIMARY KEY,
        rx_bytes BIGINT UNSIGNED DEFAULT 0,
        tx_bytes BIGINT UNSIGNED DEFAULT 0,
        rx_bandwidth DOUBLE DEFAULT 0,
        tx_bandwidth DOUBLE DEFAULT 0,
        timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
        UNIQUE INDEX (timestamp)
      ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
    `,
    columns: [
      { name: 'rx_bytes', sql: "ALTER TABLE traffic_stats ADD COLUMN rx_bytes BIGINT UNSIGNED DEFAULT 0 AFTER id" },
      { name: 'tx_bytes', sql: "ALTER TABLE traffic_stats ADD COLUMN tx_bytes BIGINT UNSIGNED DEFAULT 0 AFTER rx_bytes" },
      { name: 'rx_bandwidth', sql: "ALTER TABLE traffic_stats ADD COLUMN rx_bandwidth DOUBLE DEFAULT 0 AFTER tx_bytes" },
      { name: 'tx_bandwidth', sql: "ALTER TABLE traffic_stats ADD COLUMN tx_bandwidth DOUBLE DEFAULT 0 AFTER rx_bandwidth" }
    ]
  },
  server_locations: {
    createSql: `
      CREATE TABLE IF NOT EXISTS server_locations (
        id INT AUTO_INCREMENT PRIMARY KEY,
        ip VARCHAR(255) NOT NULL UNIQUE,
        country CHAR(2),
        country_name VARCHAR(100),
        region VARCHAR(100),
        city VARCHAR(100),
        latitude DOUBLE,
        longitude DOUBLE,
        last_updated TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
      ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
    `,
    columns: [
      { name: 'ip', sql: "ALTER TABLE server_locations ADD COLUMN ip VARCHAR(255) NOT NULL UNIQUE AFTER id" },
      { name: 'country', sql: "ALTER TABLE server_locations ADD COLUMN country CHAR(2) AFTER ip" },
      { name: 'country_name', sql: "ALTER TABLE server_locations ADD COLUMN country_name VARCHAR(100) AFTER country" },
      { name: 'region', sql: "ALTER TABLE server_locations ADD COLUMN region VARCHAR(100) AFTER country_name" },
      { name: 'city', sql: "ALTER TABLE server_locations ADD COLUMN city VARCHAR(100) AFTER region" },
      { name: 'latitude', sql: "ALTER TABLE server_locations ADD COLUMN latitude DOUBLE AFTER city" },
      { name: 'longitude', sql: "ALTER TABLE server_locations ADD COLUMN longitude DOUBLE AFTER latitude" }
    ]
  },
  latency_routes: {
    createSql: `
      CREATE TABLE IF NOT EXISTS latency_routes (
        id INT AUTO_INCREMENT PRIMARY KEY,
        source_id INT NOT NULL,
        latency_source VARCHAR(100) NOT NULL,
        latency_dest VARCHAR(100) NOT NULL,
        latency_target VARCHAR(255) NOT NULL,
        created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
      ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
    `,
    columns: [
      { name: 'source_id', sql: "ALTER TABLE latency_routes ADD COLUMN source_id INT NOT NULL AFTER id" },
      { name: 'latency_source', sql: "ALTER TABLE latency_routes ADD COLUMN latency_source VARCHAR(100) NOT NULL AFTER source_id" },
      { name: 'latency_dest', sql: "ALTER TABLE latency_routes ADD COLUMN latency_dest VARCHAR(100) NOT NULL AFTER latency_source" },
      { name: 'latency_target', sql: "ALTER TABLE latency_routes ADD COLUMN latency_target VARCHAR(255) NOT NULL AFTER latency_dest" }
    ]
  }
};

async function ensureTable(tableName, tableSchema) {
  try {
    // 1. Ensure table exists
    await db.query(tableSchema.createSql);

    // 2. Check columns
    const [columns] = await db.query(`SHOW COLUMNS FROM \`${tableName}\``);
    const existingColumns = new Set(columns.map((column) => column.Field));

    console.log(`[Database Integrity] Table '${tableName}' verified (${columns.length} columns)`);

    for (const column of tableSchema.columns || []) {
      if (!existingColumns.has(column.name)) {
        console.log(`[Database Integrity] Missing column '${column.name}' in '${tableName}'. Adding it...`);
        await db.query(column.sql);
        console.log(`[Database Integrity] Column '${column.name}' added to '${tableName}'.`);
      }
    }

    // 3. Seed data
    if (tableSchema.seedSql) {
      const [rows] = await db.query(`SELECT count(*) as count FROM \`${tableName}\``);
      if (rows[0].count === 0) {
        console.log(`[Database Integrity] Table '${tableName}' is empty. Seeding initial data...`);
        await db.query(tableSchema.seedSql);
      }
    }
  } catch (err) {
    console.error(`[Database Integrity] Error ensuring table '${tableName}':`, err.message);
    throw err;
  }
}

async function db_migrate() {
  console.log('[Database Integrity] Starting comprehensive database audit...');

  // Try to check if we can even connect
  try {
    const health = await db.checkHealth();
    if (health.status !== 'up') {
      console.warn(`[Database Integrity] initial health check failed: ${health.error}`);
      // If we can't connect, maybe the DB itself doesn't exist?
      // For now, we rely on the pool to handle connection retries/errors.
    }
  } catch (e) {
    // Ignore health check errors, let ensureTable handle the primary queries
  }

  try {
    let tablesChecked = 0;
    for (const [tableName, tableSchema] of Object.entries(SCHEMA)) {
      await ensureTable(tableName, tableSchema);
      tablesChecked++;
    }
    console.log(`[Database Integrity] Audit complete. ${tablesChecked} tables verified and healthy.`);
    return true;
  } catch (err) {
    console.error('[Database Integrity] ❌ Audit failed:', err.message);
    throw err;
  }
}

module.exports = db_migrate;
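The `ensureTable` helper above diffs the live `SHOW COLUMNS` result against the declarative `SCHEMA` map and runs only the missing `ALTER TABLE` statements. That diffing step can be isolated and tested without a database; the sketch below does so with illustrative names (`planColumnFixes` is not part of the module):

```javascript
// Sketch of the column-diff logic inside ensureTable, with the database
// stubbed out. Given the column names a table already has and the schema's
// column list, return the ALTER statements still needed.
function planColumnFixes(existingColumnNames, schemaColumns) {
  const existing = new Set(existingColumnNames);
  return (schemaColumns || [])
    .filter((column) => !existing.has(column.name))
    .map((column) => column.sql);
}

const schemaColumns = [
  { name: 'p95_type', sql: "ALTER TABLE site_settings ADD COLUMN p95_type VARCHAR(20) DEFAULT 'tx'" },
  { name: 'cdn_url', sql: "ALTER TABLE site_settings ADD COLUMN cdn_url VARCHAR(500) DEFAULT NULL" }
];

// 'p95_type' already exists, so only the cdn_url ALTER is planned.
console.log(planColumnFixes(['id', 'p95_type'], schemaColumns));
```

Because the plan is derived from a `Set` lookup, re-running the audit on an already-migrated database is a no-op, which is what makes the startup check idempotent.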
@@ -10,6 +10,7 @@ const db = require('./db');
  */

 const ipInfoToken = process.env.IPINFO_TOKEN;
+const enableExternalGeoLookup = process.env.ENABLE_EXTERNAL_GEO_LOOKUP === 'true';

 /**
  * Normalizes geo data for consistent display
@@ -17,21 +18,48 @@ const ipInfoToken = process.env.IPINFO_TOKEN;
 function normalizeGeo(geo) {
   if (!geo) return geo;

-  // Custom normalization for TW, HK, MO to "China, {CODE}"
-  const specialRegions = ['TW'];
-  if (specialRegions.includes(geo.country?.toUpperCase())) {
+  // Custom normalization for TW to "Taipei, China" and JP to "Tokyo"
+  const country = (geo.country || geo.country_code || '').toUpperCase();
+  if (country === 'TW') {
     return {
       ...geo,
-      city: `China, ${geo.country.toUpperCase()}`,
-      country_name: 'China'
+      city: 'Taipei',
+      country: 'TW',
+      country_name: 'China',
+      // Force Taipei coordinates for consistent 2D plotting
+      loc: '25.0330,121.5654',
+      latitude: 25.0330,
+      longitude: 121.5654
     };
+  } else if (country === 'JP') {
+    return {
+      ...geo,
+      city: 'Tokyo',
+      country: 'JP',
+      country_name: 'Japan',
+      // Force Tokyo coordinates for consistent 2D plotting
+      loc: '35.6895,139.6917',
+      latitude: 35.6895,
+      longitude: 139.6917
+    };
   }
   return geo;
 }

 async function getLocation(target) {
-  // Normalize target (strip port if present)
-  const cleanTarget = target.split(':')[0];
+  // Normalize target (strip port if present, handle IPv6 brackets)
+  let cleanTarget = target;
+  if (cleanTarget.startsWith('[')) {
+    const closingBracket = cleanTarget.indexOf(']');
+    if (closingBracket !== -1) {
+      cleanTarget = cleanTarget.substring(1, closingBracket);
+    }
+  } else {
+    const parts = cleanTarget.split(':');
+    if (parts.length === 2) {
+      cleanTarget = parts[0];
+    }
+  }

   // 1. Check if we already have this IP/Domain in DB (FASTEST)
   try {
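The new target normalization above exists because the old `target.split(':')[0]` mangled bare IPv6 addresses, which contain many colons. The added branches can be factored into a pure helper for clarity; `stripHostPort` below is an illustrative name, not a function in the source:

```javascript
// Sketch of the host:port normalization added above.
function stripHostPort(target) {
  if (target.startsWith('[')) {
    // Bracketed IPv6 literal, e.g. "[2001:db8::1]:9100"
    const closingBracket = target.indexOf(']');
    if (closingBracket !== -1) return target.substring(1, closingBracket);
    return target;
  }
  const parts = target.split(':');
  // Exactly one colon means "host:port"; more colons mean a bare IPv6
  // address, which must be left untouched.
  return parts.length === 2 ? parts[0] : target;
}

console.log(stripHostPort('example.com:9100'));   // -> example.com
console.log(stripHostPort('[2001:db8::1]:9100')); // -> 2001:db8::1
console.log(stripHostPort('2001:db8::1'));        // -> 2001:db8::1 (unchanged)
```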
@@ -57,7 +85,18 @@ async function getLocation(target) {
       // Secondary DB check with resolved IP
       const [rows] = await db.query('SELECT * FROM server_locations WHERE ip = ?', [cleanIp]);
       if (rows.length > 0) {
-        return normalizeGeo(rows[0]);
+        const data = rows[0];
+        // Cache the domain mapping to avoid future DNS lookups
+        if (cleanTarget !== cleanIp) {
+          try {
+            await db.query(`
+              INSERT INTO server_locations (ip, country, country_name, region, city, latitude, longitude)
+              VALUES (?, ?, ?, ?, ?, ?, ?)
+              ON DUPLICATE KEY UPDATE last_updated = CURRENT_TIMESTAMP
+            `, [cleanTarget, data.country, data.country_name, data.region, data.city, data.latitude, data.longitude]);
+          } catch(e) {}
+        }
+        return normalizeGeo(data);
       }
     } catch (err) {
       // Quiet DNS failure for tokens (legacy bug mitigation)
@@ -74,6 +113,10 @@ async function getLocation(target) {
   }

   // 4. Resolve via ipinfo.io (LAST RESORT)
+  if (!enableExternalGeoLookup) {
+    return null;
+  }
+
   try {
     console.log(`[Geo Service] API lookup (ipinfo.io) for: ${cleanIp}`);
     const url = `https://ipinfo.io/${cleanIp}/json${ipInfoToken ? `?token=${ipInfoToken}` : ''}`;
@@ -113,6 +156,29 @@ async function getLocation(target) {
         locationData.longitude
       ]);

+      // Cache the domain target as well if it differs from the resolved IP
+      if (cleanTarget !== cleanIp) {
+        await db.query(`
+          INSERT INTO server_locations (ip, country, country_name, region, city, latitude, longitude)
+          VALUES (?, ?, ?, ?, ?, ?, ?)
+          ON DUPLICATE KEY UPDATE
+            country = VALUES(country),
+            country_name = VALUES(country_name),
+            region = VALUES(region),
+            city = VALUES(city),
+            latitude = VALUES(latitude),
+            longitude = VALUES(longitude)
+        `, [
+          cleanTarget,
+          locationData.country,
+          locationData.country_name,
+          locationData.region,
+          locationData.city,
+          locationData.latitude,
+          locationData.longitude
+        ]);
+      }
+
       return locationData;
     }
   } catch (err) {
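The `INSERT ... ON DUPLICATE KEY UPDATE` statements added above are upserts keyed on the unique `ip` column: a new row is inserted for the domain target, or the existing row's geo fields are overwritten. A sketch of the same semantics with an in-memory map (`upsertLocation` and `store` are illustrative names, not part of the source):

```javascript
// Sketch of upsert-by-unique-key semantics, as used for server_locations.
function upsertLocation(store, row) {
  const existing = store.get(row.ip);
  // Merge over any existing row, as the VALUES(...) clauses do,
  // and refresh the last_updated timestamp.
  store.set(row.ip, { ...existing, ...row, last_updated: Date.now() });
}

const store = new Map();
upsertLocation(store, { ip: 'example.com', city: 'Tokyo', country: 'JP' });
upsertLocation(store, { ip: 'example.com', city: 'Taipei', country: 'TW' });
console.log(store.size, store.get('example.com').city); // -> 1 Taipei
```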
823
server/index.js
File diff suppressed because it is too large
@@ -1,90 +1,40 @@
 /**
  * Database Initialization Script
  * Run: npm run init-db
  * Creates the required MySQL database and tables.
  */
-require('dotenv').config();
+const path = require('path');
+require('dotenv').config({ path: path.join(__dirname, '..', '.env') });
 const mysql = require('mysql2/promise');
+const db_migrate = require('./db-schema-check');
+const db = require('./db');

 async function initDatabase() {
-  const connection = await mysql.createConnection({
-    host: process.env.MYSQL_HOST || 'localhost',
-    port: parseInt(process.env.MYSQL_PORT) || 3306,
-    user: process.env.MYSQL_USER || 'root',
-    password: process.env.MYSQL_PASSWORD || ''
-  });
-
+  const host = process.env.MYSQL_HOST || 'localhost';
+  const port = parseInt(process.env.MYSQL_PORT) || 3306;
+  const user = process.env.MYSQL_USER || 'root';
+  const password = process.env.MYSQL_PASSWORD || '';
   const dbName = process.env.MYSQL_DATABASE || 'display_wall';

-  console.log('🔧 Initializing database...\n');
+  // 1. Create connection without database selected to create the DB itself
+  const connection = await mysql.createConnection({
+    host,
+    port,
+    user,
+    password
+  });
+
+  console.log('🔧 Initializing database environment...\n');
+
   // Create database
   await connection.query(`CREATE DATABASE IF NOT EXISTS \`${dbName}\` CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci`);
   console.log(` ✅ Database "${dbName}" ready`);
+  await connection.end();

-  await connection.query(`USE \`${dbName}\``);
+  // 2. Re-initialize the standard pool so it can see the new DB
+  db.initPool();

-  // Create users table
-  await connection.query(`
-    CREATE TABLE IF NOT EXISTS users (
-      id INT AUTO_INCREMENT PRIMARY KEY,
-      username VARCHAR(255) NOT NULL UNIQUE,
-      password VARCHAR(255) NOT NULL,
-      salt VARCHAR(255) NOT NULL,
-      created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
-    ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
-  `);
-  console.log(' ✅ Table "users" ready');
-
-  // Create prometheus_sources table
-  await connection.query(`
-    CREATE TABLE IF NOT EXISTS prometheus_sources (
-      id INT AUTO_INCREMENT PRIMARY KEY,
-      name VARCHAR(255) NOT NULL,
-      url VARCHAR(500) NOT NULL,
-      description TEXT,
-      created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
-      updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
-    ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
-  `);
-  console.log(' ✅ Table "prometheus_sources" ready');
-
-  // Create site_settings table
-  await connection.query(`
-    CREATE TABLE IF NOT EXISTS site_settings (
-      id INT PRIMARY KEY DEFAULT 1,
-      page_name VARCHAR(255) DEFAULT '数据可视化展示大屏',
-      title VARCHAR(255) DEFAULT '数据可视化展示大屏',
-      logo_url TEXT,
-      default_theme VARCHAR(20) DEFAULT 'dark',
-      updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
-    ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
-  `);
-  // Insert default settings if not exists
-  await connection.query(`
-    INSERT IGNORE INTO site_settings (id, page_name, title, default_theme)
-    VALUES (1, '数据可视化展示大屏', '数据可视化展示大屏', 'dark')
-  `);
-  console.log(' ✅ Table "site_settings" ready');
-
-  // Create server_locations table
-  await connection.query(`
-    CREATE TABLE IF NOT EXISTS server_locations (
-      id INT AUTO_INCREMENT PRIMARY KEY,
-      ip VARCHAR(255) NOT NULL UNIQUE,
-      country CHAR(2),
-      country_name VARCHAR(100),
-      region VARCHAR(100),
-      city VARCHAR(100),
-      latitude DOUBLE,
-      longitude DOUBLE,
-      last_updated TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
-    ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
-  `);
-  console.log(' ✅ Table "server_locations" ready');
+  // 3. Use the centralized schema tool to create/fix all tables
+  console.log(' 📦 Initializing tables using schema-check tool...');
+  await db_migrate();
+  console.log(' ✅ Tables and columns ready');

   console.log('\n🎉 Database initialization complete!\n');
-  await connection.end();
 }

 initDatabase().catch(err => {
@@ -1,22 +1,43 @@
 const axios = require('axios');
 const http = require('http');
 const https = require('https');
+const cache = require('./cache'); // <-- ADD

 const QUERY_TIMEOUT = 10000;

 // Reusable agents to handle potential redirect issues and protocol mismatches
 const crypto = require('crypto');
 const httpAgent = new http.Agent({ keepAlive: true });
-const httpsAgent = new https.Agent({ keepAlive: true, rejectUnauthorized: false });
+const httpsAgent = new https.Agent({ keepAlive: true });

-const serverIdMap = new Map(); // token -> { instance, job, source }
-const SECRET = process.env.APP_SECRET || 'prom-data-panel-stable-secret-key-123';
+const serverIdMap = new Map(); // token -> { instance, job, source, lastSeen }
+
+function getSecret() {
+  // Use the env variable populated by index.js initialization
+  return process.env.APP_SECRET || 'fallback-secret-for-safety';
+}
+
+// Periodic cleanup of serverIdMap to prevent infinite growth
+setInterval(() => {
+  const now = Date.now();
+  const TTL = 24 * 60 * 60 * 1000; // 24 hours
+  for (const [token, data] of serverIdMap.entries()) {
+    if (now - (data.lastSeen || 0) > TTL) {
+      serverIdMap.delete(token);
+    }
+  }
+}, 3600000); // Once per hour

 function getServerToken(instance, job, source) {
-  const hash = crypto.createHmac('sha256', SECRET)
+  const hash = crypto.createHmac('sha256', getSecret())
     .update(`${instance}:${job}:${source}`)
     .digest('hex')
     .substring(0, 16);
+
+  // Update lastSeen timestamp
+  const data = serverIdMap.get(hash);
+  if (data) data.lastSeen = Date.now();

   return hash;
 }
@@ -48,12 +69,12 @@ function createClient(baseUrl) {
 /**
  * Test Prometheus connection
  */
-async function testConnection(url) {
+async function testConnection(url, customTimeout = null) {
   const normalized = normalizeUrl(url);
   try {
     // Using native fetch to avoid follow-redirects/axios "protocol mismatch" issues in some Node environments
     const controller = new AbortController();
-    const timer = setTimeout(() => controller.abort(), QUERY_TIMEOUT);
+    const timer = setTimeout(() => controller.abort(), customTimeout || QUERY_TIMEOUT);

     // Node native fetch - handles http/https automatically
     const res = await fetch(`${normalized}/api/v1/status/buildinfo`, {
@@ -188,7 +209,11 @@ async function getOverviewMetrics(url, sourceName) {
     diskFreeResult,
     netRxResult,
     netTxResult,
-    targetsResult
+    netRx24hResult,
+    netTx24hResult,
+    targetsResult,
+    conntrackEntriesResult,
+    conntrackLimitResult
   ] = await Promise.all([
     // CPU usage per instance: 1 - avg idle
     query(url, '100 - (avg by (instance, job) (rate(node_cpu_seconds_total{mode="idle"}[1m])) * 100)').catch(() => []),
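Each query in the `Promise.all` above carries its own `.catch(() => [])`, so one failing PromQL query degrades to an empty result instead of rejecting the whole fan-out. A minimal sketch of that fail-soft pattern, with `fakeQuery` standing in for the real `query()` helper:

```javascript
// Sketch: fail-soft parallel queries. A rejected promise is converted to
// an empty result list before Promise.all sees it.
async function fakeQuery(promql) {
  if (promql.includes('bad')) throw new Error('query failed');
  return [{ metric: { instance: 'node1:9100' }, value: [1700000000, '42'] }];
}

async function fetchAll() {
  return Promise.all([
    fakeQuery('node_load1').catch(() => []),
    fakeQuery('bad_metric').catch(() => [])
  ]);
}

fetchAll().then(([ok, failed]) => {
  console.log(ok.length, failed.length); // -> 1 0
});
```

Without the per-promise `.catch`, a single unreachable metric (e.g. conntrack on a host without the collector) would make the entire overview request fail.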
@@ -206,8 +231,16 @@ async function getOverviewMetrics(url, sourceName) {
     query(url, 'sum by (instance, job) (rate(node_network_receive_bytes_total{device!~"lo|veth.*|docker.*|br-.*"}[1m]))').catch(() => []),
     // Network transmit rate (bytes/sec)
     query(url, 'sum by (instance, job) (rate(node_network_transmit_bytes_total{device!~"lo|veth.*|docker.*|br-.*"}[1m]))').catch(() => []),
+    // 24h Network receive total (bytes)
+    query(url, 'sum by (instance, job) (increase(node_network_receive_bytes_total{device!~"lo|veth.*|docker.*|br-.*"}[24h]))').catch(() => []),
+    // 24h Network transmit total (bytes)
+    query(url, 'sum by (instance, job) (increase(node_network_transmit_bytes_total{device!~"lo|veth.*|docker.*|br-.*"}[24h]))').catch(() => []),
     // Targets status from /api/v1/targets
-    getTargets(url).catch(() => [])
+    getTargets(url).catch(() => []),
+    // Conntrack entries
+    query(url, 'node_nf_conntrack_entries').catch(() => []),
+    // Conntrack limits
+    query(url, 'node_nf_conntrack_entries_limit').catch(() => [])
   ]);

   // Fetch 24h detailed traffic using the A*duration logic
@@ -222,7 +255,10 @@ async function getOverviewMetrics(url, sourceName) {
       const token = getServerToken(originalInstance, job, sourceName);

       // Store mapping for detail queries
-      serverIdMap.set(token, { instance: originalInstance, source: sourceName, job });
+      serverIdMap.set(token, { instance: originalInstance, source: sourceName, job, lastSeen: Date.now() });
+
+      // Also store in Valkey for resilience across restarts
+      cache.set(`server_token:${token}`, originalInstance, 86400).catch(()=>{});

       if (!instances.has(token)) {
         instances.set(token, {
@@ -238,9 +274,14 @@ async function getOverviewMetrics(url, sourceName) {
           diskUsed: 0,
           netRx: 0,
           netTx: 0,
+          traffic24hRx: 0,
+          traffic24hTx: 0,
+          conntrackEntries: 0,
+          conntrackLimit: 0,
           up: false,
           memPercent: 0,
-          diskPercent: 0
+          diskPercent: 0,
+          conntrackPercent: 0
         });
       }
       const inst = instances.get(token);
@@ -306,6 +347,26 @@ async function getOverviewMetrics(url, sourceName) {
       inst.netTx = parseFloat(r.value[1]) || 0;
     }

+    // Parse 24h traffic
+    for (const r of netRx24hResult) {
+      const inst = getOrCreate(r.metric);
+      inst.traffic24hRx = parseFloat(r.value[1]) || 0;
+    }
+    for (const r of netTx24hResult) {
+      const inst = getOrCreate(r.metric);
+      inst.traffic24hTx = parseFloat(r.value[1]) || 0;
+    }
+
+    // Parse conntrack
+    for (const r of conntrackEntriesResult) {
+      const inst = getOrCreate(r.metric);
+      inst.conntrackEntries = parseFloat(r.value[1]) || 0;
+    }
+    for (const r of conntrackLimitResult) {
+      const inst = getOrCreate(r.metric);
+      inst.conntrackLimit = parseFloat(r.value[1]) || 0;
+    }
+
     for (const inst of instances.values()) {
       if (!inst.up && (inst.cpuPercent > 0 || inst.memTotal > 0)) {
         inst.up = true;
@@ -313,6 +374,7 @@ async function getOverviewMetrics(url, sourceName) {
		// Calculate percentages on backend
		inst.memPercent = inst.memTotal > 0 ? (inst.memUsed / inst.memTotal * 100) : 0;
		inst.diskPercent = inst.diskTotal > 0 ? (inst.diskUsed / inst.diskTotal * 100) : 0;
+		inst.conntrackPercent = inst.conntrackLimit > 0 ? (inst.conntrackEntries / inst.conntrackLimit * 100) : 0;
	}

	const allInstancesList = Array.from(instances.values());
@@ -391,25 +453,23 @@ function calculateTrafficFromHistory(values) {
}

/**
- * Get total traffic for the past 24h by fetching all points and integrating
+ * Get total traffic for the past 24h using Prometheus increase() for stability and accuracy
 */
async function get24hTrafficSum(url) {
	const now = Math.floor(Date.now() / 1000);
	const start = now - 86400;
	const step = 60; // 1-minute points for calculation

	try {
		const [rxResult, txResult] = await Promise.all([
-			queryRange(url, 'sum(rate(node_network_receive_bytes_total{device!~"lo|veth.*|docker.*|br-.*"}[1m]))', start, now, step).catch(() => []),
-			queryRange(url, 'sum(rate(node_network_transmit_bytes_total{device!~"lo|veth.*|docker.*|br-.*"}[1m]))', start, now, step).catch(() => [])
+			query(url, 'sum(increase(node_network_receive_bytes_total{device!~"lo|veth.*|docker.*|br-.*"}[24h]))').catch(() => []),
+			query(url, 'sum(increase(node_network_transmit_bytes_total{device!~"lo|veth.*|docker.*|br-.*"}[24h]))').catch(() => [])
		]);

-		const rxValues = rxResult.length > 0 ? rxResult[0].values : [];
-		const txValues = txResult.length > 0 ? txResult[0].values : [];
+		const rx = rxResult.length > 0 ? parseFloat(rxResult[0].value[1]) : 0;
+		const tx = txResult.length > 0 ? parseFloat(txResult[0].value[1]) : 0;

-		return {
-			rx: calculateTrafficFromHistory(rxValues),
-			tx: calculateTrafficFromHistory(txValues)
-		};
+		return { rx, tx };
	} catch (err) {
		console.error(`[Prometheus] get24hTrafficSum error:`, err.message);
		return { rx: 0, tx: 0 };
	}
}

/**
@@ -417,34 +477,28 @@ async function get24hTrafficSum(url) {
 */
async function get24hServerTrafficSum(url, instance, job) {
	const node = resolveToken(instance);
-	const now = Math.floor(Date.now() / 1000);
-	const start = now - 86400;
-	const step = 60;

-	const rxExpr = `sum(rate(node_network_receive_bytes_total{instance="${node}",job="${job}",device!~'tap.*|veth.*|br.*|docker.*|virbr*|podman.*|lo.*|vmbr.*|fwbr.|ip.*|gre.*|virbr.*|vnet.*'}[1m]))`;
-	const txExpr = `sum(rate(node_network_transmit_bytes_total{instance="${node}",job="${job}",device!~'tap.*|veth.*|br.*|docker.*|virbr*|podman.*|lo.*|vmbr.*|fwbr.|ip.*|gre.*|virbr.*|vnet.*'}[1m]))`;
+	const rxExpr = `sum(increase(node_network_receive_bytes_total{instance="${node}",job="${job}",device!~'tap.*|veth.*|br.*|docker.*|virbr*|podman.*|lo.*|vmbr.*|fwbr.|ip.*|gre.*|virbr.*|vnet.*'}[24h]))`;
+	const txExpr = `sum(increase(node_network_transmit_bytes_total{instance="${node}",job="${job}",device!~'tap.*|veth.*|br.*|docker.*|virbr*|podman.*|lo.*|vmbr.*|fwbr.|ip.*|gre.*|virbr.*|vnet.*'}[24h]))`;

	const [rxResult, txResult] = await Promise.all([
-		queryRange(url, rxExpr, start, now, step).catch(() => []),
-		queryRange(url, txExpr, start, now, step).catch(() => [])
+		query(url, rxExpr).catch(() => []),
+		query(url, txExpr).catch(() => [])
	]);

-	const rxValues = rxResult.length > 0 ? rxResult[0].values : [];
-	const txValues = txResult.length > 0 ? txResult[0].values : [];
+	const rx = rxResult.length > 0 ? parseFloat(rxResult[0].value[1]) : 0;
+	const tx = txResult.length > 0 ? parseFloat(txResult[0].value[1]) : 0;

-	return {
-		rx: calculateTrafficFromHistory(rxValues),
-		tx: calculateTrafficFromHistory(txValues)
-	};
+	return { rx, tx };
}

/**
 * Get network traffic history (past 24h, 5-min intervals for chart)
 */
async function getNetworkHistory(url) {
-	const now = Math.floor(Date.now() / 1000);
-	const start = now - 86400; // 24h ago
	const step = 300; // 5 minutes for better resolution on chart
+	const now = Math.floor(Date.now() / 1000 / step) * step; // Sync to step boundary
+	const start = now - 86400; // 24h ago

	const [rxResult, txResult] = await Promise.all([
		queryRange(url,
@@ -496,9 +550,9 @@ function mergeNetworkHistories(histories) {
 * Get CPU usage history (past 1h, 1-min intervals)
 */
async function getCpuHistory(url) {
-	const now = Math.floor(Date.now() / 1000);
-	const start = now - 3600; // 1h ago
	const step = 60; // 1 minute
+	const now = Math.floor(Date.now() / 1000 / step) * step; // Sync to step boundary
+	const start = now - 3600; // 1h ago

	const result = await queryRange(url,
		'100 - (avg(rate(node_cpu_seconds_total{mode="idle"}[1m])) * 100)',
@@ -533,30 +587,27 @@ function mergeCpuHistories(histories) {
}


-function resolveToken(token) {
+async function resolveToken(token) {
	if (serverIdMap.has(token)) {
		return serverIdMap.get(token).instance;
	}
+	const cachedInstance = await cache.get(`server_token:${token}`);
+	if (cachedInstance) return cachedInstance;

	return token;
}

/**
 * Get detailed metrics for a specific server (node)
 */
-async function getServerDetails(baseUrl, instance, job) {
+async function getServerDetails(baseUrl, instance, job, settings = {}) {
	const url = normalizeUrl(baseUrl);
-	const node = resolveToken(instance);
+	const node = await resolveToken(instance);

	// Queries based on the requested dashboard structure
	const queries = {
		// Split CPU
		cpuSystem: `avg(rate(node_cpu_seconds_total{mode="system", instance="${node}"}[1m])) * 100`,
		cpuUser: `avg(rate(node_cpu_seconds_total{mode="user", instance="${node}"}[1m])) * 100`,
		cpuIowait: `avg(rate(node_cpu_seconds_total{mode="iowait", instance="${node}"}[1m])) * 100`,
		cpuIrq: `avg(rate(node_cpu_seconds_total{mode=~"irq|softirq", instance="${node}"}[1m])) * 100`,
		cpuOther: `avg(rate(node_cpu_seconds_total{mode=~"nice|steal|guest|guest_nice", instance="${node}"}[1m])) * 100`,
		cpuIdle: `avg(rate(node_cpu_seconds_total{mode="idle", instance="${node}"}[1m])) * 100`,

		cpuBusy: `100 * (1 - avg(rate(node_cpu_seconds_total{mode="idle", instance="${node}"}[1m])))`,
		sysLoad: `node_load1{instance="${node}",job="${job}"} * 100 / count(count(node_cpu_seconds_total{instance="${node}",job="${job}"}) by (cpu))`,
		memUsedPct: `(1 - (node_memory_MemAvailable_bytes{instance="${node}", job="${job}"} / node_memory_MemTotal_bytes{instance="${node}", job="${job}"})) * 100`,
@@ -564,11 +615,16 @@ async function getServerDetails(baseUrl, instance, job) {
		rootFsUsedPct: `100 - ((node_filesystem_avail_bytes{instance="${node}",job="${job}",mountpoint="/",fstype!~"rootfs|tmpfs"} * 100) / node_filesystem_size_bytes{instance="${node}",job="${job}",mountpoint="/",fstype!~"rootfs|tmpfs"})`,
		cpuCores: `count(count(node_cpu_seconds_total{instance="${node}",job="${job}"}) by (cpu))`,
		memTotal: `node_memory_MemTotal_bytes{instance="${node}",job="${job}"}`,
		swapTotal: `node_memory_SwapTotal_bytes{instance="${node}",job="${job}"}`,
		rootFsTotal: `node_filesystem_size_bytes{instance="${node}",job="${job}",mountpoint="/",fstype!~"rootfs|tmpfs"}`,
		uptime: `node_time_seconds{instance="${node}",job="${job}"} - node_boot_time_seconds{instance="${node}",job="${job}"}`,
		netRx: `sum(rate(node_network_receive_bytes_total{instance="${node}",job="${job}",device!~'tap.*|veth.*|br.*|docker.*|virbr*|podman.*|lo.*|vmbr.*|fwbr.|ip.*|gre.*|virbr.*|vnet.*'}[1m]))`,
		netTx: `sum(rate(node_network_transmit_bytes_total{instance="${node}",job="${job}",device!~'tap.*|veth.*|br.*|docker.*|virbr*|podman.*|lo.*|vmbr.*|fwbr.|ip.*|gre.*|virbr.*|vnet.*'}[1m]))`,
		sockstatTcp: `node_sockstat_TCP_inuse{instance="${node}",job="${job}"}`,
		sockstatTcpMem: `node_sockstat_TCP_mem{instance="${node}",job="${job}"} * 4096`,
+		conntrackEntries: `node_nf_conntrack_entries{instance="${node}",job="${job}"}`,
+		conntrackLimit: `node_nf_conntrack_entries_limit{instance="${node}",job="${job}"}`,
+		conntrackUsedPct: `(node_nf_conntrack_entries{instance="${node}",job="${job}"} / node_nf_conntrack_entries_limit{instance="${node}",job="${job}"}) * 100`,
		// Get individual partitions (excluding virtual and FUSE mounts)
		partitions_size: `node_filesystem_size_bytes{instance="${node}", job="${job}", fstype!~"tmpfs|autofs|proc|sysfs|fuse.*", mountpoint!~"/tmp.*|/var/lib/docker/.*|/run/.*"}`,
		partitions_free: `node_filesystem_free_bytes{instance="${node}", job="${job}", fstype!~"tmpfs|autofs|proc|sysfs|fuse.*", mountpoint!~"/tmp.*|/var/lib/docker/.*|/run/.*"}`
@@ -594,6 +650,85 @@ async function getServerDetails(baseUrl, instance, job) {

	await Promise.all(queryPromises);

+	// Process custom metrics from settings
+	results.custom_data = [];
+	try {
+		const customMetrics = typeof settings.custom_metrics === 'string'
+			? JSON.parse(settings.custom_metrics)
+			: (settings.custom_metrics || []);
+
+		if (Array.isArray(customMetrics) && customMetrics.length > 0) {
+			const customPromises = customMetrics.map(async (cfg) => {
+				if (!cfg.metric) return null;
+				try {
+					const expr = `${cfg.metric}{instance="${node}",job="${job}"}`;
+					const res = await query(url, expr);
+					if (res && res.length > 0) {
+						const val = res[0].metric[cfg.label || 'address'] || res[0].value[1];
+
+						// If this metric is marked as an IP source, update the main IP fields
+						if (cfg.is_ip && !results.ipv4?.length && !results.ipv6?.length) {
+							if (val.includes(':')) {
+								results.ipv6 = [val];
+								results.ipv4 = [];
+							} else {
+								results.ipv4 = [val];
+								results.ipv6 = [];
+							}
+						}
+
+						return {
+							name: cfg.name || cfg.metric,
+							value: val
+						};
+					}
+				} catch (e) {
+					console.error(`[Prometheus] Custom metric error (${cfg.metric}):`, e.message);
+				}
+				return null;
+			});
+
+			const customResults = await Promise.all(customPromises);
+			results.custom_data = customResults.filter(r => r !== null);
+		}
+	} catch (err) {
+		console.error('[Prometheus] Error processing custom metrics:', err.message);
+	}
+
+	// Ensure IP discovery fallback if no custom IP metric found
+	if ((!results.ipv4 || results.ipv4.length === 0) && (!results.ipv6 || results.ipv6.length === 0)) {
+		try {
+			const targets = await getTargets(baseUrl);
+			const matchedTarget = targets.find(t => t.labels && t.labels.instance === node && t.labels.job === job);
+			if (matchedTarget) {
+				const scrapeUrl = matchedTarget.scrapeUrl || '';
+				try {
+					const urlObj = new URL(scrapeUrl);
+					const host = urlObj.hostname;
+					if (host.includes(':')) {
+						results.ipv6 = [host];
+						results.ipv4 = [];
+					} else {
+						results.ipv4 = [host];
+						results.ipv6 = [];
+					}
+				} catch (e) {
+					const host = scrapeUrl.split('//').pop().split('/')[0].split(':')[0];
+					if (host) {
+						results.ipv4 = [host];
+						results.ipv6 = [];
+					}
+				}
+			}
+		} catch (e) {
+			console.error(`[Prometheus] Target fallback error for ${node}:`, e.message);
+		}
+	}
+
+	// Final sanitization
+	results.ipv4 = results.ipv4 || [];
+	results.ipv6 = results.ipv6 || [];

	// Group partitions
	const partitionsMap = {};
	(results.partitions_size || []).forEach(p => {
@@ -632,46 +767,22 @@ async function getServerDetails(baseUrl, instance, job) {
/**
 * Get historical metrics for a specific server (node)
 */
-async function getServerHistory(baseUrl, instance, job, metric, range = '1h', start = null, end = null) {
+async function getServerHistory(baseUrl, instance, job, metric, range = '1h', start = null, end = null, p95Type = 'tx') {
	const url = normalizeUrl(baseUrl);
-	const node = resolveToken(instance);
+	const node = await resolveToken(instance);

-	// Custom multi-metric handler for CPU Busy
+	// CPU Busy history: 100 - idle
	if (metric === 'cpuBusy') {
-		const modes = {
-			system: 'system',
-			user: 'user',
-			iowait: 'iowait',
-			irq: 'irq|softirq',
-			other: 'nice|steal|guest|guest_nice',
-			idle: 'idle'
-		};
-
+		const expr = `100 * (1 - avg(rate(node_cpu_seconds_total{mode="idle", instance="${node}"}[1m])))`;
		const rangeObj = parseRange(range, start, end);
-		const timestamps = [];
-		const series = {};
-		Object.keys(modes).forEach(m => series[m] = []);
+		const result = await queryRange(url, expr, rangeObj.queryStart, rangeObj.queryEnd, rangeObj.step);

-		const results = await Promise.all(Object.entries(modes).map(async ([name, mode]) => {
-			const expr = `avg(rate(node_cpu_seconds_total{mode=~"${mode}", instance="${node}"}[1m])) * 100`;
-			const res = await queryRange(url, expr, rangeObj.queryStart, rangeObj.queryEnd, rangeObj.step);
-			return { name, values: res.length > 0 ? res[0].values : [] };
-		}));
+		if (!result || result.length === 0) return { timestamps: [], values: [] };

-		if (results[0].values.length === 0) return { timestamps: [], series: {} };
-
-		// Use first result for timestamps
-		results[0].values.forEach(v => timestamps.push(v[0] * 1000));
-
-		results.forEach(r => {
-			r.values.forEach(v => series[r.name].push(parseFloat(v[1])));
-		});
-
-		// Pre-calculate busy percentage: 100 - idle
-		const idleValues = series.idle || [];
-		const busyValues = idleValues.map(idleVal => Math.max(0, 100 - idleVal));
-
-		return { timestamps, series, values: busyValues };
+		return {
+			timestamps: result[0].values.map(v => v[0] * 1000),
+			values: result[0].values.map(v => parseFloat(v[1]))
+		};
	}

	// Map metric keys to Prometheus expressions
@@ -683,7 +794,8 @@ async function getServerHistory(baseUrl, instance, job, metric, range = '1h', st
		netRx: `sum(rate(node_network_receive_bytes_total{instance="${node}",job="${job}",device!~'tap.*|veth.*|br.*|docker.*|virbr*|podman.*|lo.*|vmbr.*|fwbr.|ip.*|gre.*|virbr.*|vnet.*'}[1m]))`,
		netTx: `sum(rate(node_network_transmit_bytes_total{instance="${node}",job="${job}",device!~'tap.*|veth.*|br.*|docker.*|virbr*|podman.*|lo.*|vmbr.*|fwbr.|ip.*|gre.*|virbr.*|vnet.*'}[1m]))`,
		sockstatTcp: `node_sockstat_TCP_inuse{instance="${node}",job="${job}"}`,
-		sockstatTcpMem: `node_sockstat_TCP_mem{instance="${node}",job="${job}"} * 4096`
+		sockstatTcpMem: `node_sockstat_TCP_mem{instance="${node}",job="${job}"} * 4096`,
+		conntrackUsedPct: `(node_nf_conntrack_entries{instance="${node}",job="${job}"} / node_nf_conntrack_entries_limit{instance="${node}",job="${job}"}) * 100`
	};

	const rangeObj = parseRange(range, start, end);
@@ -711,9 +823,22 @@ async function getServerHistory(baseUrl, instance, job, metric, range = '1h', st
		txTotal += (tx[i] || 0) * duration;
	}

-	const sortedTx = [...tx].sort((a, b) => a - b);
-	const p95Idx = Math.floor(sortedTx.length * 0.95);
-	const p95 = sortedTx.length > 0 ? sortedTx[p95Idx] : 0;
+	// Calculate P95 based on p95Type
+	let combined = [];
+	if (p95Type === 'rx') {
+		combined = [...rx];
+	} else if (p95Type === 'both') {
+		combined = tx.map((t, i) => (t || 0) + (rx[i] || 0));
+	} else if (p95Type === 'max') {
+		combined = tx.map((t, i) => Math.max(t || 0, rx[i] || 0));
+	} else {
+		// Default to tx
+		combined = [...tx];
+	}
+
+	const sorted = combined.sort((a, b) => a - b);
+	const p95Idx = Math.floor(sorted.length * 0.95);
+	const p95 = sorted.length > 0 ? sorted[p95Idx] : 0;

	return {
		timestamps,
@@ -804,10 +929,8 @@ module.exports = {
	getLatency: async (blackboxUrl, target) => {
		if (!blackboxUrl || !target) return null;
		try {
-			const normalized = blackboxUrl.trim().replace(/\/+$/, '');
+			const normalized = normalizeUrl(blackboxUrl);

			// Construct a single optimized query searching for priority metrics and common labels
			// Prioritize probe_icmp_duration_seconds OVER probe_duration_seconds
			const queryExpr = `(
				probe_icmp_duration_seconds{phase="rtt", instance="${target}"} or
				probe_icmp_duration_seconds{phase="rtt", target="${target}"} or
@@ -819,14 +942,9 @@ module.exports = {
				probe_duration_seconds{target="${target}"}
			)`;

-			const params = new URLSearchParams({ query: queryExpr });
-			const res = await fetch(`${normalized}/api/v1/query?${params.toString()}`);
-
-			if (res.ok) {
-				const data = await res.json();
-				if (data.status === 'success' && data.data.result.length > 0) {
-					return parseFloat(data.data.result[0].value[1]) * 1000;
-				}
+			const result = await query(normalized, queryExpr);
+			if (result && result.length > 0) {
+				return parseFloat(result[0].value[1]) * 1000;
			}
			return null;
		} catch (err) {
187	update.sh	Normal file
@@ -0,0 +1,187 @@
+#!/bin/bash
+
+set -euo pipefail
+
+SERVICE_NAME="promdatapanel"
+DEFAULT_APP_DIR="/opt/promdata-panel"
+ZIP_URL="https://git.littlediary.cn/CN-JS-HuiBai/PromdataPanel/archive/main.zip"
+
+GREEN='\033[0;32m'
+BLUE='\033[0;34m'
+RED='\033[0;31m'
+YELLOW='\033[1;33m'
+NC='\033[0m'
+
+APP_DIR=""
+TEMP_DIR=""
+BACKUP_DIR=""
+ROLLBACK_REQUIRED=false
+
+echo -e "${BLUE}=== Starting PromdataPanel Update ===${NC}"
+
+cleanup() {
+    if [ -n "${TEMP_DIR}" ] && [ -d "${TEMP_DIR}" ]; then
+        rm -rf "${TEMP_DIR}"
+    fi
+}
+
+rollback() {
+    if [ "$ROLLBACK_REQUIRED" != true ] || [ -z "${BACKUP_DIR}" ] || [ ! -d "${BACKUP_DIR}" ]; then
+        return
+    fi
+
+    echo -e "${YELLOW}Update failed. Restoring previous application state...${NC}"
+    rsync -a --delete --exclude '.env' "${BACKUP_DIR}/" "${APP_DIR}/"
+}
+
+trap 'rollback' ERR
+trap cleanup EXIT
+
+validate_app_dir() {
+    local dir="$1"
+    [ -n "$dir" ] || return 1
+    [ -d "$dir" ] || return 1
+    [ -f "$dir/package.json" ] || return 1
+    [ -f "$dir/server/index.js" ] || return 1
+    [ -f "$dir/public/index.html" ] || return 1
+    return 0
+}
+
+detect_app_dir() {
+    local service_dir=""
+    if command -v systemctl >/dev/null 2>&1 && systemctl list-unit-files | grep -q "^${SERVICE_NAME}\.service"; then
+        echo "Detecting application directory from systemd service..."
+        service_dir=$(systemctl show -p WorkingDirectory "$SERVICE_NAME" | cut -d= -f2-)
+        if validate_app_dir "$service_dir"; then
+            APP_DIR="$service_dir"
+            return
+        fi
+    fi
+
+    local current_dir
+    current_dir=$(pwd)
+    if validate_app_dir "$current_dir"; then
+        APP_DIR="$current_dir"
+        return
+    fi
+
+    if validate_app_dir "$DEFAULT_APP_DIR"; then
+        APP_DIR="$DEFAULT_APP_DIR"
+        return
+    fi
+
+    echo -e "${RED}Error: Could not locate a valid PromdataPanel application directory.${NC}"
+    echo -e "${YELLOW}Expected markers: package.json, server/index.js, public/index.html${NC}"
+    exit 1
+}
+
+ensure_tool() {
+    local cmd="$1"
+    if command -v "$cmd" >/dev/null 2>&1; then
+        return
+    fi
+
+    echo -e "${BLUE}${cmd} is not installed. Attempting to install it...${NC}"
+    if command -v apt-get >/dev/null 2>&1; then
+        sudo apt-get update
+        sudo apt-get install -y "$cmd"
+    elif command -v dnf >/dev/null 2>&1; then
+        sudo dnf install -y "$cmd"
+    elif command -v yum >/dev/null 2>&1; then
+        sudo yum install -y "$cmd"
+    elif command -v apk >/dev/null 2>&1; then
+        sudo apk add "$cmd"
+    else
+        echo -e "${RED}Error: '${cmd}' is not installed and could not be auto-installed.${NC}"
+        exit 1
+    fi
+}
+
+update_from_git() {
+    echo -e "${BLUE}Git repository detected. Pulling latest code...${NC}"
+    if [ -n "$(git status --porcelain)" ]; then
+        echo -e "${RED}Error: Working tree has local changes. Commit or stash them before updating.${NC}"
+        exit 1
+    fi
+    git pull --ff-only
+}
+
+update_from_zip() {
+    echo -e "${BLUE}No git repository found. Updating via ZIP archive with staging and rollback...${NC}"
+    ensure_tool curl
+    ensure_tool unzip
+    ensure_tool rsync
+
+    TEMP_DIR=$(mktemp -d "${TMPDIR:-/tmp}/promdatapanel-update-XXXXXX")
+    BACKUP_DIR="${TEMP_DIR}/backup"
+    local archive_path="${TEMP_DIR}/latest.zip"
+    local extracted_folder=""
+    local staging_dir=""
+
+    echo "Downloading latest version (main branch)..."
+    curl -fL "$ZIP_URL" -o "$archive_path"
+
+    echo "Extracting archive..."
+    unzip -q "$archive_path" -d "$TEMP_DIR"
+    extracted_folder=$(find "$TEMP_DIR" -mindepth 1 -maxdepth 1 -type d ! -name backup | head -n 1)
+
+    if ! validate_app_dir "$extracted_folder"; then
+        echo -e "${RED}Extraction failed or archive structure is invalid.${NC}"
+        exit 1
+    fi
+
+    staging_dir="${TEMP_DIR}/staging"
+    mkdir -p "$staging_dir"
+    rsync -a --exclude '.git' "$extracted_folder/" "$staging_dir/"
+
+    if [ -f "${APP_DIR}/.env" ]; then
+        cp "${APP_DIR}/.env" "${staging_dir}/.env"
+    fi
+
+    echo "Installing dependencies in staging directory..."
+    (
+        cd "$staging_dir"
+        npm install --production
+    )
+
+    echo "Creating rollback backup..."
+    rsync -a --delete --exclude '.env' "${APP_DIR}/" "${BACKUP_DIR}/"
+
+    echo "Applying staged update..."
+    ROLLBACK_REQUIRED=true
+    rsync -a --delete --exclude '.env' "${staging_dir}/" "${APP_DIR}/"
+}
+
+restart_service() {
+    if command -v systemctl >/dev/null 2>&1 && systemctl is-active --quiet "$SERVICE_NAME"; then
+        echo -e "${BLUE}Restarting systemd service: ${SERVICE_NAME}...${NC}"
+        sudo systemctl restart "$SERVICE_NAME"
+        return
+    fi
+
+    if command -v pm2 >/dev/null 2>&1 && pm2 list | grep -q "$SERVICE_NAME"; then
+        echo -e "${BLUE}Restarting with PM2...${NC}"
+        pm2 restart "$SERVICE_NAME"
+        return
+    fi
+
+    echo -e "${YELLOW}Warning: Could not detect an active systemd service or PM2 process named '${SERVICE_NAME}'.${NC}"
+    echo -e "${YELLOW}Please restart the application manually.${NC}"
+}
+
+detect_app_dir
+echo -e "${BLUE}Application directory: ${APP_DIR}${NC}"
+cd "$APP_DIR"
+
+if [ -d ".git" ]; then
+    update_from_git
+    echo -e "${BLUE}Updating npm dependencies...${NC}"
+    npm install --production
+else
+    update_from_zip
+fi
+
+restart_service
+ROLLBACK_REQUIRED=false
+
+echo -e "${GREEN}=== Update successfully finished ===${NC}"