Streaming new version

wood chen 2024-01-27 23:41:14 +08:00
parent 1320f1e0ee
commit 7c80c88073
52 changed files with 926 additions and 1057 deletions


@ -1,26 +0,0 @@
---
name: Documentation improvement
about: Share your suggestions for improving the documentation!
title: "📝 Documentation improvement"
labels: ["documentation"]
---
# Documentation improvement suggestion 📝
Share your suggestions for improving the documentation here; we look forward to hearing your ideas.
## What is your suggestion? 🤔
Briefly describe your documentation improvement, including your goal and your thinking.
If it addresses a specific problem, please provide as much context and detail as you can.
## What are the advantages of your suggestion? 🌟
Briefly describe the strengths of your suggestion, for example:
- Does it make the documentation more readable and easier to use?
- Does it make the documentation more detailed and accurate?
- Does it make the documentation better reflect the actual state of the project?
Thanks for sharing and for your support! 🙏


@ -1,19 +0,0 @@
---
name: Feature improvement
about: Share your suggestions for improvement!
title: "🚀 Feature improvement"
labels: ["enhancement"]
---
# Feature improvement suggestion 🚀
Share your suggestions for improving features here; we look forward to hearing your ideas.
## What is your suggestion? 🤔
Briefly describe your feature improvement, including your goal and your thinking.
If it addresses a specific problem, please provide as much context and detail as you can.
Thanks for sharing and for your support! 🙏


@ -1,26 +0,0 @@
---
name: Bug report
about: Report a bug in this project
title: "🐞 Bug report"
labels: ["bug"]
---
# Bug report 🐞
If you run into a bug while using this project, please report it here and we will fix it as soon as possible.
## Bug description 🤔
Describe the problem in detail, including the environment, the steps to reproduce it, and the fixes you have already tried.
If you have already consulted other GitHub issues while troubleshooting, please say so and reference them.
## Additional information 📝
Please provide the following to help us resolve the issue faster:
- Output logs, including error messages and stack traces
- Relevant code snippets or files
- Environment details such as operating system and software versions
Thanks for your feedback! 🙏


@ -1,29 +0,0 @@
---
name: Project maintenance
about: Submit your project maintenance issues and suggestions!
title: "🔧 Project maintenance"
labels: ["maintenance"]
---
# Project maintenance issues and suggestions 🔧
Share your project maintenance issues and suggestions here; we look forward to hearing your ideas.
## What is your issue or suggestion? 🤔
Briefly describe the maintenance issue you ran into or the maintenance change you propose, including your goal and your thinking.
Note: if your suggestion touches any of the following areas, please call that out in the description so we can understand your intent better.
- Code refactoring
- Strengthening design patterns
- Algorithm optimization
- Dependency upgrades
## What solution do you expect? 💡
Briefly describe the solution you expect, including your goals and ideas.
If it addresses a specific problem, please provide as much context and detail as you can.
Thanks for sharing and for your support! 🙏


@ -1,26 +0,0 @@
---
name: Deployment issue
about: If you run into any problem while deploying, feel free to discuss it here.
title: "🚰 Deployment issue"
labels: ["question"]
---
# Questions and discussion 💬
Submit the problems you encounter here and we will reply and help as soon as possible.
## Problem description 🤔
Describe the problem in detail, including the environment, the steps to reproduce it, and the fixes you have already tried.
If you have already consulted other GitHub issues while troubleshooting, please say so and reference them.
## Additional information 📝
To understand the problem better, we need the following from you:
- Output logs, including error messages and stack traces.
- Relevant code snippets or files.
- Environment details such as operating system and golang version.
Thanks for your feedback and support! 🙏


@ -1,6 +0,0 @@
<!-- Before opening a PR, please add one of these labels in the Labels panel on the right: [feature], [fix], [documentation], [dependencies], [test]. This lets Actions categorize the PR automatically when generating releases. -->
## Description
Briefly describe the changes in this pull request.
## Related issues
- [issue number](issue link)


@ -1,43 +0,0 @@
# Configuration for Release Drafter: https://github.com/toolmantim/release-drafter
name-template: 'v$NEXT_PATCH_VERSION 🌈'
tag-template: 'v$NEXT_PATCH_VERSION'
version-template: $MAJOR.$MINOR.$PATCH
# Emoji reference: https://gitmoji.carloscuesta.me/
categories:
- title: 🚀 Features
labels:
- 'feature'
- 'enhancement'
- 'kind/feature'
- title: 🚑️ Bug Fixes
labels:
- 'fix'
- 'bugfix'
- 'bug'
- 'regression'
- 'kind/bug'
- title: 📝 Documentation updates
labels:
- 'doc'
- 'documentation'
- 'kind/doc'
- title: 👷 Maintenance
labels:
- refactor
- chore
- dependencies
- 'kind/chore'
- 'kind/dep'
- title: 🚦 Tests
labels:
- test
- tests
exclude-labels:
- reverted
- no-changelog
- skip-changelog
- invalid
change-template: '* $TITLE (#$NUMBER) @$AUTHOR'
template: |
## What's Changed
$CHANGES


@ -1,39 +0,0 @@
name: Build Release
on:
release:
types: [created,published]
permissions:
contents: read
jobs:
build-go-binary:
permissions:
contents: write # for build-go-binary
runs-on: ubuntu-latest
strategy:
matrix:
goos: [ linux, windows, darwin ] # target operating systems
goarch: [ amd64, arm64 ] # target architectures
exclude: # exclude certain platform/architecture combinations
- goarch: arm64
goos: windows
steps:
- name: Checkout the code
uses: actions/checkout@v2
- name: Create version file
run: echo ${{ github.event.release.tag_name }} > VERSION
- name: Parallel build
uses: wangyoucao577/go-release-action@v1.30
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
goos: ${{ matrix.goos }}
goarch: ${{ matrix.goarch }}
goversion: 1.18
pre_command: export CGO_ENABLED=0 && export GODEBUG=http2client=0
executable_compression: "upx -9"
md5sum: false
project_path: "./code"
binary_name: "feishu-chatgpt"
extra_files: ./code/config.example.yaml readme.md LICENSE ./code/role_list.yaml


@ -1,59 +0,0 @@
name: build docker image
# Triggered by release events + pushes to master + manual dispatch
on:
# push:
# branches:
# - master
release:
types: [created,published]
# can be triggered manually
workflow_dispatch:
inputs:
logLevel:
description: 'Log level'
required: true
default: 'warning'
tags:
description: 'Test scenario tags'
jobs:
buildx:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Inject slug/short variables
uses: rlespinasse/github-slug-action@v4
- name: Set up QEMU
uses: docker/setup-qemu-action@v1
- name: Set up Docker Buildx
id: buildx
uses: docker/setup-buildx-action@v1
- name: Available platforms
run: echo ${{ steps.buildx.outputs.platforms }}
- name: Login to DockerHub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Build and push
uses: docker/build-push-action@v2
with:
context: .
file: ./Dockerfile
# Target architectures; the Available platforms step above lists everything that is supported
platforms: linux/amd64,linux/arm64/v8
# Only push the image when the run is not for a pull request
push: ${{ github.event_name != 'pull_request' }}
# Tag the manifest with multiple tags
tags: |
${{ secrets.DOCKERHUB_USERNAME }}/${{ env.GITHUB_REPOSITORY_NAME_PART }}:${{ env.GITHUB_REF_NAME }}
${{ secrets.DOCKERHUB_USERNAME }}/${{ env.GITHUB_REPOSITORY_NAME_PART }}:latest

.github/workflows/docker.yml (new file)

@ -0,0 +1,42 @@
name: Docker
on:
push:
branches:
- main
tags:
- v*
env:
IMAGE_NAME: oapi-feishu
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Check out the repository
uses: actions/checkout@v3
- name: Build the image
run: docker build . --file Dockerfile --tag $IMAGE_NAME
- name: Log in to the registry
run: echo "${{ secrets.ACCESS_TOKEN }}" | docker login -u woodchen --password-stdin
- name: Push the image
run: |
IMAGE_ID=woodchen/$IMAGE_NAME
IMAGE_ID=$(echo $IMAGE_ID | tr '[A-Z]' '[a-z]')
# Get the branch name from the GitHub event payload
BRANCH_NAME=$(echo "${{ github.ref }}" | sed -e 's,.*/\(.*\),\1,')
# Use "latest" for the "main" branch; otherwise use the branch name as the version
VERSION=$(if [ "$BRANCH_NAME" == "main" ]; then echo "latest"; else echo $BRANCH_NAME; fi)
echo IMAGE_ID=$IMAGE_ID
echo VERSION=$VERSION
docker tag $IMAGE_NAME $IMAGE_ID:$VERSION
docker push $IMAGE_ID:$VERSION


@ -1,38 +0,0 @@
name: Release Drafter
on:
push:
# branches to consider in the event; optional, defaults to all
branches:
- master
# pull_request event is required only for autolabeler
pull_request:
# Only following types are handled by the action, but one can default to all as well
types: [opened, reopened, synchronize]
# pull_request_target event is required for autolabeler to support PRs from forks
# pull_request_target:
# types: [opened, reopened, synchronize]
permissions:
contents: read
jobs:
update_release_draft:
permissions:
contents: write # for release-drafter/release-drafter to create a github release
pull-requests: write # for release-drafter/release-drafter to add label to PR
runs-on: ubuntu-latest
steps:
# (Optional) GitHub Enterprise requires GHE_HOST variable set
#- name: Set GHE_HOST
# run: |
# echo "GHE_HOST=${GITHUB_SERVER_URL##https:\/\/}" >> $GITHUB_ENV
# Drafts your next Release notes as Pull Requests are merged into "master"
- uses: release-drafter/release-drafter@v5
# (Optional) specify config name to use, relative to .github/. Default: release-drafter.yml
# with:
# config-name: my-config.yml
# disable-autolabeler: true
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

.gitignore

@ -1,40 +0,0 @@
### Go template
# If you prefer the allow list template instead of the deny list, see community template:
# https://github.com/github/gitignore/blob/main/community/Golang/Go.AllowList.gitignore
#
# Binaries for programs and plugins
*.exe
*.exe~
*.dll
*.so
*.dylib
# Test binary, built with `go test -c`
*.test
# Output of the go coverage tool, specifically when used with LiteIDE
*.out
# Dependency directories (remove the comment below to include it)
# vendor/
# Go workspace file
go.work
./code/target
.idea
.vscode
.s
config.yaml
/code/target/
start-feishubot
.env
docker.md
# Mac OS
.DS_Store
**/.DS_Store
*.pem


@ -1,10 +1,12 @@
# Whether to enable logging.
ENABLE_LOG: true
# Feishu
APP_ID: cli_axxx
APP_SECRET: xxx
APP_ENCRYPT_KEY: xxx
APP_VERIFICATION_TOKEN: xxx
# Make sure this matches the setting in the Feishu app admin console
BOT_NAME: chatGpt
# Make sure this matches the Feishu app admin console. Using Feishu-OpenAI-Stream-Chatbot as the bot name is recommended; if you run several bots, it makes them easy to tell apart
BOT_NAME: xxx
# OpenAI keys; load balancing is supported: list several keys separated by commas
OPENAI_KEY: sk-xxx,sk-xxx,sk-xxx
# Server configuration
@ -17,11 +19,20 @@ KEY_FILE: key.pem
API_URL: https://oapi.czl.net
# Proxy settings, e.g. "http://127.0.0.1:7890"; "" means no proxy
HTTP_PROXY: ""
# Timeout in seconds for ordinary HTTP requests to OpenAI; defaults to 550 seconds if unset
OPENAI_HTTP_CLIENT_TIMEOUT:
# OpenAI model; for more options see the /v1/chat/completions rows in https://platform.openai.com/docs/models/model-endpoint-compatibility
OPENAI_MODEL: gpt-3.5-turbo
# AZURE OPENAI
AZURE_ON: false # set true to use Azure rather than OpenAI
AZURE_ON: false # set to true to use Azure rather than OpenAI
AZURE_API_VERSION: 2023-03-15-preview # 2023-03-15-preview or 2022-12-01, see https://learn.microsoft.com/en-us/azure/cognitive-services/openai/reference#completions
AZURE_RESOURCE_NAME: xxxx # found in the endpoint URL, which usually looks like https://{RESOURCE_NAME}.openai.azure.com
AZURE_DEPLOYMENT_NAME: xxxx # usually looks like ...openai.azure.com/openai/deployments/{DEPLOYMENT_NAME}/chat/completions.
AZURE_OPENAI_TOKEN: xxxx # Authentication key. We can use Azure Active Directory Authentication (TBD).
## Access control
# Whether to enable access control. Disabled by default.
ACCESS_CONTROL_ENABLE: false
# Maximum number of questions per user per day. Unlimited by default; a value less than or equal to 0 means no limit.
ACCESS_CONTROL_MAX_COUNT_PER_USER_PER_DAY: 0


@ -8,10 +8,12 @@ require (
github.com/duke-git/lancet/v2 v2.1.17
github.com/gin-gonic/gin v1.8.2
github.com/google/uuid v1.3.0
github.com/k0kubun/pp/v3 v3.2.0
github.com/larksuite/oapi-sdk-gin v1.0.0
github.com/pandodao/tokenizer-go v0.2.0
github.com/patrickmn/go-cache v2.1.0+incompatible
github.com/pion/opus v0.0.0-20230123082803-1052c3e89e58
github.com/sashabaranov/go-openai v1.7.0
github.com/spf13/pflag v1.0.5
github.com/spf13/viper v1.14.0
gopkg.in/yaml.v2 v2.4.0
@ -33,6 +35,7 @@ require (
github.com/json-iterator/go v1.1.12 // indirect
github.com/leodido/go-urn v1.2.1 // indirect
github.com/magiconair/properties v1.8.7 // indirect
github.com/mattn/go-colorable v0.1.13 // indirect
github.com/mattn/go-isatty v0.0.17 // indirect
github.com/mitchellh/mapstructure v1.5.0 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
@ -51,5 +54,6 @@ require (
golang.org/x/text v0.8.0 // indirect
google.golang.org/protobuf v1.28.1 // indirect
gopkg.in/ini.v1 v1.67.0 // indirect
gopkg.in/natefinch/lumberjack.v2 v2.2.1 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
)


@ -170,6 +170,8 @@ github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnr
github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU=
github.com/jstemmer/go-junit-report v0.9.1/go.mod h1:Brl9GWCQeLvo8nXZwPNNblvFj/XSXhF0NWZEnDohbsk=
github.com/k0kubun/pp/v3 v3.2.0 h1:h33hNTZ9nVFNP3u2Fsgz8JXiF5JINoZfFq4SvKJwNcs=
github.com/k0kubun/pp/v3 v3.2.0/go.mod h1:ODtJQbQcIRfAD3N+theGCV1m/CBxweERz2dapdz1EwA=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/kr/fs v0.1.0/go.mod h1:FFnZGqtBN9Gxj7eW1uZ42v5BccTP0vu6NEaFoC2HwRg=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
@ -188,6 +190,9 @@ github.com/leodido/go-urn v1.2.1 h1:BqpAaACuzVSgi/VLzGZIobT2z4v53pjosyNd9Yv6n/w=
github.com/leodido/go-urn v1.2.1/go.mod h1:zt4jvISO2HfUBqxjfIshjdMTYS56ZS/qv49ictyFfxY=
github.com/magiconair/properties v1.8.7 h1:IeQXZAiQcpL9mgcAe1Nu6cX9LLw6ExEHKjN0VQdvPDY=
github.com/magiconair/properties v1.8.7/go.mod h1:Dhd985XPs7jluiymwWYZ0G4Z61jb3vdS329zhj2hYo0=
github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxecdEvA=
github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg=
github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
github.com/mattn/go-isatty v0.0.17 h1:BTarxUcIeDqL27Mc+vyvdWYSL28zpIhv3RoTdsLMPng=
github.com/mattn/go-isatty v0.0.17/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
github.com/mitchellh/mapstructure v1.5.0 h1:jeMsZIYE/09sWLaz43PL7Gy6RuMjD2eJVyuac5Z2hdY=
@ -217,6 +222,10 @@ github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFR
github.com/rogpeppe/go-internal v1.6.1/go.mod h1:xXDCJY+GAPziupqXw64V24skbSoqbTEfhy4qGm1nDQc=
github.com/rogpeppe/go-internal v1.8.0 h1:FCbCCtXNOY3UtUuHUYaghJg4y7Fd14rXifAYUAtL9R8=
github.com/rogpeppe/go-internal v1.8.0/go.mod h1:WmiCO8CzOY8rg0OYDC4/i/2WRWAB6poM+XZ2dLUbcbE=
github.com/sashabaranov/go-openai v1.7.0 h1:D1dBXoZhtf/aKNu6WFf0c7Ah2NM30PZ/3Mqly6cZ7fk=
github.com/sashabaranov/go-openai v1.7.0/go.mod h1:lj5b/K+zjTSFxVLijLSTDZuP7adOgerWeFyZLUhAKRg=
github.com/sashabaranov/go-openai v1.9.0 h1:NoiO++IISxxJ1pRc0n7uZvMGMake0G+FJ1XPwXtprsA=
github.com/sashabaranov/go-openai v1.9.0/go.mod h1:lj5b/K+zjTSFxVLijLSTDZuP7adOgerWeFyZLUhAKRg=
github.com/spf13/afero v1.9.3 h1:41FoI0fD7OR7mGcKE/aOiLkGreyf8ifIOQmJANWogMk=
github.com/spf13/afero v1.9.3/go.mod h1:iUV7ddyEEZPO5gA3zD4fJt6iStLlL+Lg4m2cihcDf8Y=
github.com/spf13/cast v1.5.0 h1:rj3WzYc11XZaIZMPKmwP96zkFEnnAmV8s6XbB2aY32w=
@ -570,6 +579,8 @@ gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EV
gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
gopkg.in/ini.v1 v1.67.0 h1:Dgnx+6+nfE+IfzjUEISNeydPJh9AXNNsWbGP9KzCsOA=
gopkg.in/ini.v1 v1.67.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
gopkg.in/natefinch/lumberjack.v2 v2.2.1 h1:bBRl1b0OH9s/DuPhuXpNl+VtCaJXFZ5/uEFST95x9zc=
gopkg.in/natefinch/lumberjack.v2 v2.2.1/go.mod h1:YD8tP3GAjkrDg1eZH7EGmyESg/lsYskCTPBJVb9jqSc=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=


@ -1,38 +0,0 @@
package handlers
import (
"context"
"start-feishubot/services"
"start-feishubot/services/openai"
larkcard "github.com/larksuite/oapi-sdk-go/v3/card"
)
// AIModeChooseKind is the kind of card action for choosing AI mode
func NewAIModeCardHandler(cardMsg CardMsg,
m MessageHandler) CardHandlerFunc {
return func(ctx context.Context, cardAction *larkcard.CardAction) (interface{}, error) {
if cardMsg.Kind == AIModeChooseKind {
newCard, err, done := CommonProcessAIMode(cardMsg, cardAction,
m.sessionCache)
if done {
return newCard, err
}
return nil, nil
}
return nil, ErrNextHandler
}
}
// CommonProcessAIMode is the common process for choosing AI mode
func CommonProcessAIMode(msg CardMsg, cardAction *larkcard.CardAction,
cache services.SessionServiceCacheInterface) (interface{},
error, bool) {
option := cardAction.Action.Option
replyMsg(context.Background(), "已选择AI模式:"+option,
&msg.MsgId)
cache.SetAIMode(msg.SessionId, openai.AIModeMap[option])
return nil, nil, true
}


@ -2,10 +2,8 @@ package handlers
import (
"context"
"start-feishubot/services"
larkcard "github.com/larksuite/oapi-sdk-go/v3/card"
"start-feishubot/services"
)
func NewClearCardHandler(cardMsg CardMsg, m MessageHandler) CardHandlerFunc {


@ -4,7 +4,6 @@ import (
"context"
"encoding/json"
"fmt"
larkcard "github.com/larksuite/oapi-sdk-go/v3/card"
)
@ -23,16 +22,13 @@ func NewCardHandler(m MessageHandler) CardHandlerFunc {
NewPicModeChangeHandler,
NewRoleTagCardHandler,
NewRoleCardHandler,
NewAIModeCardHandler,
}
return func(ctx context.Context, cardAction *larkcard.CardAction) (interface{}, error) {
var cardMsg CardMsg
actionValue := cardAction.Action.Value
actionValueJson, _ := json.Marshal(actionValue)
if err := json.Unmarshal(actionValueJson, &cardMsg); err != nil {
return nil, err
}
json.Unmarshal(actionValueJson, &cardMsg)
//pp.Println(cardMsg)
for _, handler := range handlers {
h := handler(cardMsg, m)


@ -2,10 +2,8 @@ package handlers
import (
"context"
"start-feishubot/services"
larkcard "github.com/larksuite/oapi-sdk-go/v3/card"
"start-feishubot/services"
)
func NewPicResolutionHandler(cardMsg CardMsg, m MessageHandler) CardHandlerFunc {
@ -30,7 +28,6 @@ func NewPicModeChangeHandler(cardMsg CardMsg, m MessageHandler) CardHandlerFunc
return nil, ErrNextHandler
}
}
func NewPicTextMoreHandler(cardMsg CardMsg, m MessageHandler) CardHandlerFunc {
return func(ctx context.Context, cardAction *larkcard.CardAction) (interface{}, error) {
if cardMsg.Kind == PicTextMoreKind {


@ -2,12 +2,10 @@ package handlers
import (
"context"
larkcard "github.com/larksuite/oapi-sdk-go/v3/card"
"start-feishubot/initialization"
"start-feishubot/services"
"start-feishubot/services/openai"
larkcard "github.com/larksuite/oapi-sdk-go/v3/card"
)
func NewRoleTagCardHandler(cardMsg CardMsg,


@ -15,108 +15,9 @@ func msgFilter(msg string) string {
return regex.ReplaceAllString(msg, "")
}
// Parse rich text json to text
func parsePostContent(content string) string {
/*
{
"title":"我是一个标题",
"content":[
[
{
"tag":"text",
"text":"第一行 :",
"style": ["bold", "underline"]
},
{
"tag":"a",
"href":"http://www.feishu.cn",
"text":"超链接",
"style": ["bold", "italic"]
},
{
"tag":"at",
"user_id":"@_user_1",
"user_name":"",
"style": []
}
],
[
{
"tag":"img",
"image_key":"img_47354fbc-a159-40ed-86ab-2ad0f1acb42g"
}
],
[
{
"tag":"text",
"text":"第二行:",
"style": ["bold", "underline"]
},
{
"tag":"text",
"text":"文本测试",
"style": []
}
],
[
{
"tag":"img",
"image_key":"img_47354fbc-a159-40ed-86ab-2ad0f1acb42g"
}
],
[
{
"tag":"media",
"file_key": "file_v2_0dcdd7d9-fib0-4432-a519-41d25aca542j",
"image_key": "img_7ea74629-9191-4176-998c-2e603c9c5e8g"
}
],
[
{
"tag": "emotion",
"emoji_type": "SMILE"
}
]
]
}
*/
var contentMap map[string]interface{}
err := json.Unmarshal([]byte(content), &contentMap)
if err != nil {
fmt.Println(err)
}
if contentMap["content"] == nil {
return ""
}
var text string
// deal with title
if contentMap["title"] != nil && contentMap["title"] != "" {
text += contentMap["title"].(string) + "\n"
}
// deal with content
contentList := contentMap["content"].([]interface{})
for _, v := range contentList {
for _, v1 := range v.([]interface{}) {
if v1.(map[string]interface{})["tag"] == "text" {
text += v1.(map[string]interface{})["text"].(string)
}
}
// add new line
text += "\n"
}
return msgFilter(text)
}
func parseContent(content, msgType string) string {
func parseContent(content string) string {
//"{\"text\":\"@_user_1 hahaha\"}",
//only get text content hahaha
if msgType == "post" {
return parsePostContent(content)
}
var contentMap map[string]interface{}
err := json.Unmarshal([]byte(content), &contentMap)
if err != nil {
@ -128,7 +29,6 @@ func parseContent(content, msgType string) string {
text := contentMap["text"].(string)
return msgFilter(text)
}
func processMessage(msg interface{}) (string, error) {
msg = strings.TrimSpace(msg.(string))
msgB, err := json.Marshal(msg)


@ -1,69 +0,0 @@
package handlers
import (
"context"
"fmt"
"os"
"start-feishubot/initialization"
"start-feishubot/utils/audio"
larkim "github.com/larksuite/oapi-sdk-go/v3/service/im/v1"
)
type AudioAction struct { /* audio */
}
func (*AudioAction) Execute(a *ActionInfo) bool {
check := AzureModeCheck(a)
if !check {
return true
}
// only parse audio in direct chats; skip it elsewhere
if a.info.handlerType != UserHandler {
return true
}
// check whether the message is audio
if a.info.msgType == "audio" {
fileKey := a.info.fileKey
//fmt.Printf("fileKey: %s \n", fileKey)
msgId := a.info.msgId
//fmt.Println("msgId: ", *msgId)
req := larkim.NewGetMessageResourceReqBuilder().MessageId(
*msgId).FileKey(fileKey).Type("file").Build()
resp, err := initialization.GetLarkClient().Im.MessageResource.Get(context.Background(), req)
//fmt.Println(resp, err)
if err != nil {
fmt.Println(err)
return true
}
f := fmt.Sprintf("%s.ogg", fileKey)
resp.WriteFile(f)
defer os.Remove(f)
//fmt.Println("f: ", f)
output := fmt.Sprintf("%s.mp3", fileKey)
// wait for the conversion to finish
audio.OggToWavByPath(f, output)
defer os.Remove(output)
//fmt.Println("output: ", output)
text, err := a.handler.gpt.AudioToText(output)
if err != nil {
fmt.Println(err)
sendMsg(*a.ctx, fmt.Sprintf("🤖️:语音转换失败,请稍后再试~\n错误信息: %v", err), a.info.msgId)
return false
}
replyMsg(*a.ctx, fmt.Sprintf("🤖️:%s", text), a.info.msgId)
//fmt.Println("text: ", text)
a.info.qParsed = text
return true
}
return true
}


@ -3,12 +3,10 @@ package handlers
import (
"context"
"fmt"
larkim "github.com/larksuite/oapi-sdk-go/v3/service/im/v1"
"start-feishubot/initialization"
"start-feishubot/services/openai"
"start-feishubot/utils"
larkim "github.com/larksuite/oapi-sdk-go/v3/service/im/v1"
)
type MsgInfo struct {
@ -16,6 +14,7 @@ type MsgInfo struct {
msgType string
msgId *string
chatId *string
userId string
qParsed string
fileKey string
imageKey string
@ -153,15 +152,3 @@ func (*RoleListAction) Execute(a *ActionInfo) bool {
}
return true
}
type AIModeAction struct { /* AI mode */
}
func (*AIModeAction) Execute(a *ActionInfo) bool {
if _, foundMode := utils.EitherCutPrefix(a.info.qParsed,
"/ai_mode", "AI模式"); foundMode {
SendAIModeListsCard(*a.ctx, a.info.sessionId, a.info.msgId, openai.AIModeStrs)
return false
}
return true
}


@ -1,41 +1,157 @@
package handlers
import (
"encoding/json"
"fmt"
"github.com/k0kubun/pp/v3"
"log"
"start-feishubot/initialization"
"start-feishubot/services/accesscontrol"
"start-feishubot/services/chatgpt"
"start-feishubot/services/openai"
"strings"
"time"
)
type MessageAction struct { /* message */
chatgpt *chatgpt.ChatGPT
}
func (*MessageAction) Execute(a *ActionInfo) bool {
func (m *MessageAction) Execute(a *ActionInfo) bool {
// Add access control
if initialization.GetConfig().AccessControlEnable &&
!accesscontrol.CheckAllowAccessThenIncrement(&a.info.userId) {
msg := fmt.Sprintf("UserId: 【%s】 has accessed max count today! Max access count today %s: 【%d】",
a.info.userId, accesscontrol.GetCurrentDateFlag(), initialization.GetConfig().AccessControlMaxCountPerUserPerDay)
_ = sendMsg(*a.ctx, msg, a.info.chatId)
return false
}
//s := "快速响应,用于测试: " + time.Now().String() +
// " accesscontrol.currentDate " + accesscontrol.GetCurrentDateFlag()
//_ = sendMsg(*a.ctx, s, a.info.chatId)
//log.Println(s)
//return false
cardId, err2 := sendOnProcess(a)
if err2 != nil {
return false
}
answer := ""
chatResponseStream := make(chan string)
done := make(chan struct{}) // done signal so the goroutines can exit cleanly
noContentTimeout := time.AfterFunc(10*time.Second, func() {
pp.Println("no content timeout")
close(done)
err := updateFinalCard(*a.ctx, "请求超时", cardId)
if err != nil {
return
}
return
})
defer noContentTimeout.Stop()
msg := a.handler.sessionCache.GetMsg(*a.info.sessionId)
msg = append(msg, openai.Messages{
Role: "user", Content: a.info.qParsed,
})
// get ai mode as temperature
aiMode := a.handler.sessionCache.GetAIMode(*a.info.sessionId)
completions, err := a.handler.gpt.Completions(msg, aiMode)
if err != nil {
replyMsg(*a.ctx, fmt.Sprintf(
"🤖️:消息机器人摆烂了,请稍后再试~\n错误信息: %v", err), a.info.msgId)
return false
go func() {
defer func() {
if err := recover(); err != nil {
err := updateFinalCard(*a.ctx, "聊天失败", cardId)
if err != nil {
printErrorMessage(a, msg, err)
return
}
}
}()
//log.Printf("UserId: %s , Request: %s", a.info.userId, msg)
if err := m.chatgpt.StreamChat(*a.ctx, msg, chatResponseStream); err != nil {
err := updateFinalCard(*a.ctx, "聊天失败", cardId)
if err != nil {
printErrorMessage(a, msg, err)
return
}
close(done) // close the done signal
return // avoid closing done a second time on the error path
}
close(done) // close the done signal
}()
ticker := time.NewTicker(700 * time.Millisecond)
defer ticker.Stop() // stop the ticker when the function returns
go func() {
for {
select {
case <-done:
return
case <-ticker.C:
err := updateTextCard(*a.ctx, answer, cardId)
if err != nil {
printErrorMessage(a, msg, err)
return
}
}
}
}()
for {
select {
case res, ok := <-chatResponseStream:
if !ok {
return false
}
noContentTimeout.Stop()
answer += res
//pp.Println("answer", answer)
case <-done: // handle the done signal
err := updateFinalCard(*a.ctx, answer, cardId)
if err != nil {
printErrorMessage(a, msg, err)
return false
}
ticker.Stop()
msg := append(msg, openai.Messages{
Role: "assistant", Content: answer,
})
a.handler.sessionCache.SetMsg(*a.info.sessionId, msg)
close(chatResponseStream)
//if new topic
//if len(msg) == 2 {
// //fmt.Println("new topic", msg[1].Content)
// //updateNewTextCard(*a.ctx, a.info.sessionId, a.info.msgId,
// // completions.Content)
//}
log.Printf("\n\n\n")
log.Printf("Success request: UserId: %s , Request: %s , Response: %s", a.info.userId, msg, answer)
jsonByteArray, err := json.Marshal(msg)
if err != nil {
log.Printf("Error marshaling JSON request: UserId: %s , Request: %s , Response: %s", a.info.userId, jsonByteArray, answer)
}
jsonStr := strings.ReplaceAll(string(jsonByteArray), "\\n", "")
jsonStr = strings.ReplaceAll(jsonStr, "\n", "")
log.Printf("\n\n\n")
log.Printf("Success request plain jsonStr: UserId: %s , Request: %s , Response: %s",
a.info.userId, jsonStr, answer)
return false
}
}
msg = append(msg, completions)
a.handler.sessionCache.SetMsg(*a.info.sessionId, msg)
//if new topic
if len(msg) == 2 {
//fmt.Println("new topic", msg[1].Content)
sendNewTopicCard(*a.ctx, a.info.sessionId, a.info.msgId,
completions.Content)
return false
}
err = replyMsg(*a.ctx, completions.Content, a.info.msgId)
if err != nil {
replyMsg(*a.ctx, fmt.Sprintf(
"🤖️:消息机器人摆烂了,请稍后再试~\n错误信息: %v", err), a.info.msgId)
return false
}
return true
}
func printErrorMessage(a *ActionInfo, msg []openai.Messages, err error) {
log.Printf("Failed request: UserId: %s , Request: %s , Err: %s", a.info.userId, msg, err)
}
func sendOnProcess(a *ActionInfo) (*string, error) {
// send the "processing" card
cardId, err := sendOnProcessCard(*a.ctx, a.info.sessionId, a.info.msgId)
if err != nil {
return nil, err
}
return cardId, nil
}
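Aside: the streaming flow above boils down to three pieces: a producer goroutine writing deltas to a channel, a ticker that periodically flushes the accumulated answer to the card, and a done channel that ends the flush loop. A minimal self-contained sketch of that pattern, with illustrative names only (the fmt.Println calls stand in for updateTextCard/updateFinalCard):

package main

import (
	"fmt"
	"time"
)

func main() {
	deltas := make(chan string) // stands in for chatResponseStream
	done := make(chan struct{})
	answer := ""

	// producer: emit chunks, then signal completion
	go func() {
		for _, s := range []string{"Hello", ", ", "world"} {
			deltas <- s
			time.Sleep(200 * time.Millisecond)
		}
		close(done)
	}()

	ticker := time.NewTicker(700 * time.Millisecond)
	defer ticker.Stop()

	for {
		select {
		case s := <-deltas:
			answer += s // accumulate the partial answer
		case <-ticker.C:
			fmt.Println("flush:", answer) // periodic card update
		case <-done:
			fmt.Println("final:", answer) // final card update
			return
		}
	}
}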


@ -1,107 +0,0 @@
package handlers
import (
"context"
"fmt"
"os"
"start-feishubot/initialization"
"start-feishubot/services"
"start-feishubot/services/openai"
"start-feishubot/utils"
larkim "github.com/larksuite/oapi-sdk-go/v3/service/im/v1"
)
type PicAction struct { /* picture */
}
func (*PicAction) Execute(a *ActionInfo) bool {
check := AzureModeCheck(a)
if !check {
return true
}
// enter picture-creation mode
if _, foundPic := utils.EitherTrimEqual(a.info.qParsed,
"/picture", "图片创作"); foundPic {
a.handler.sessionCache.Clear(*a.info.sessionId)
a.handler.sessionCache.SetMode(*a.info.sessionId,
services.ModePicCreate)
a.handler.sessionCache.SetPicResolution(*a.info.sessionId,
services.Resolution256)
sendPicCreateInstructionCard(*a.ctx, a.info.sessionId,
a.info.msgId)
return false
}
mode := a.handler.sessionCache.GetMode(*a.info.sessionId)
//fmt.Println("mode: ", mode)
// received an image while not in picture-creation mode; ask whether to switch modes
if a.info.msgType == "image" && mode != services.ModePicCreate {
sendPicModeCheckCard(*a.ctx, a.info.sessionId, a.info.msgId)
return false
}
if a.info.msgType == "image" && mode == services.ModePicCreate {
// save the image
imageKey := a.info.imageKey
//fmt.Printf("fileKey: %s \n", imageKey)
msgId := a.info.msgId
//fmt.Println("msgId: ", *msgId)
req := larkim.NewGetMessageResourceReqBuilder().MessageId(
*msgId).FileKey(imageKey).Type("image").Build()
resp, err := initialization.GetLarkClient().Im.MessageResource.Get(context.Background(), req)
//fmt.Println(resp, err)
if err != nil {
//fmt.Println(err)
replyMsg(*a.ctx, fmt.Sprintf("🤖️:图片下载失败,请稍后再试~\n 错误信息: %v", err),
a.info.msgId)
return false
}
f := fmt.Sprintf("%s.png", imageKey)
resp.WriteFile(f)
defer os.Remove(f)
resolution := a.handler.sessionCache.GetPicResolution(*a.
info.sessionId)
openai.ConvertJpegToPNG(f)
openai.ConvertToRGBA(f, f)
// validate the image
err = openai.VerifyPngs([]string{f})
if err != nil {
replyMsg(*a.ctx, fmt.Sprintf("🤖️:无法解析图片,请发送原图并尝试重新操作~"),
a.info.msgId)
return false
}
bs64, err := a.handler.gpt.GenerateOneImageVariation(f, resolution)
if err != nil {
replyMsg(*a.ctx, fmt.Sprintf(
"🤖️:图片生成失败,请稍后再试~\n错误信息: %v", err), a.info.msgId)
return false
}
replayImagePlainByBase64(*a.ctx, bs64, a.info.msgId)
return false
}
// generate an image
if mode == services.ModePicCreate {
resolution := a.handler.sessionCache.GetPicResolution(*a.
info.sessionId)
bs64, err := a.handler.gpt.GenerateOneImage(a.info.qParsed,
resolution)
if err != nil {
replyMsg(*a.ctx, fmt.Sprintf(
"🤖️:图片生成失败,请稍后再试~\n错误信息: %v", err), a.info.msgId)
return false
}
replayImageCardByBase64(*a.ctx, bs64, a.info.msgId, a.info.sessionId,
a.info.qParsed)
return false
}
return true
}


@ -3,13 +3,14 @@ package handlers
import (
"context"
"fmt"
"strings"
"start-feishubot/initialization"
"start-feishubot/services"
"start-feishubot/services/chatgpt"
"start-feishubot/services/openai"
"strings"
larkcard "github.com/larksuite/oapi-sdk-go/v3/card"
larkim "github.com/larksuite/oapi-sdk-go/v3/service/im/v1"
)
@ -40,11 +41,12 @@ func judgeMsgType(event *larkim.P2MessageReceiveV1) (string, error) {
msgType := event.Event.Message.MessageType
switch *msgType {
case "text", "image", "audio", "post":
case "text", "image", "audio":
return *msgType, nil
default:
return "", fmt.Errorf("unknown message type: %v", *msgType)
}
}
func (m MessageHandler) msgReceivedHandler(ctx context.Context, event *larkim.P2MessageReceiveV1) error {
@ -75,8 +77,9 @@ func (m MessageHandler) msgReceivedHandler(ctx context.Context, event *larkim.P2
handlerType: handlerType,
msgType: msgType,
msgId: msgId,
userId: *event.Event.Sender.SenderId.UserId,
chatId: chatId,
qParsed: strings.Trim(parseContent(*content, msgType), " "),
qParsed: strings.Trim(parseContent(*content), " "),
fileKey: parseFileKey(*content),
imageKey: parseImageKey(*content),
sessionId: sessionId,
@ -90,18 +93,16 @@ func (m MessageHandler) msgReceivedHandler(ctx context.Context, event *larkim.P2
actions := []Action{
&ProcessedUniqueAction{}, // avoid processing the same event twice
&ProcessMentionAction{}, // decide whether the bot should be invoked
&AudioAction{}, // audio handling
&EmptyAction{}, // empty-message handling
&ClearAction{}, // clear-session handling
&PicAction{}, // picture handling
&AIModeAction{}, // AI-mode switching
&RoleListAction{}, // role-list handling
&HelpAction{}, // help handling
&BalanceAction{}, // balance query handling
&RolePlayAction{}, // role-play handling
&MessageAction{}, // message handling
&MessageAction{
chatgpt: chatgpt.NewGpt3(&m.config),
}, // message handling
}
chain(data, actions...)
return nil
}


@ -2,7 +2,6 @@ package handlers
import (
"context"
"start-feishubot/initialization"
"start-feishubot/services/openai"


@ -6,7 +6,6 @@ import (
"encoding/base64"
"errors"
"fmt"
"start-feishubot/initialization"
"start-feishubot/services"
"start-feishubot/services/openai"
@ -27,7 +26,6 @@ var (
PicVarMoreKind = CardKind("pic_var_more") // 变量图片
RoleTagsChooseKind = CardKind("role_tags_choose") // 内置角色所属标签选择
RoleChooseKind = CardKind("role_choose") // 内置角色选择
AIModeChooseKind = CardKind("ai_mode_choose") // AI模式选择
)
var (
@ -76,14 +74,43 @@ func replyCard(ctx context.Context,
return nil
}
func newSendCard(
header *larkcard.MessageCardHeader,
elements ...larkcard.MessageCardElement) (string,
error) {
func replyCardWithBackId(ctx context.Context,
msgId *string,
cardContent string,
) (*string, error) {
client := initialization.GetLarkClient()
resp, err := client.Im.Message.Reply(ctx, larkim.NewReplyMessageReqBuilder().
MessageId(*msgId).
Body(larkim.NewReplyMessageReqBodyBuilder().
MsgType(larkim.MsgTypeInteractive).
Uuid(uuid.New().String()).
Content(cardContent).
Build()).
Build())
// handle the request error
if err != nil {
fmt.Println(err)
return nil, err
}
// handle server-side errors
if !resp.Success() {
fmt.Println(resp.Code, resp.Msg, resp.RequestId())
return nil, errors.New(resp.Msg)
}
//ctx = context.WithValue(ctx, "SendMsgId", *resp.Data.MessageId)
//SendMsgId := ctx.Value("SendMsgId")
//pp.Println(SendMsgId)
return resp.Data.MessageId, nil
}
func newSendCard(header *larkcard.MessageCardHeader, elements ...larkcard.MessageCardElement) (string, error) {
config := larkcard.NewMessageCardConfig().
WideScreenMode(false).
EnableForward(true).
UpdateMulti(false).
UpdateMulti(true).
Build()
var aElementPool []larkcard.MessageCardElement
for _, element := range elements {
@ -99,6 +126,26 @@ func newSendCard(
String()
return cardContent, err
}
func newSendCardWithOutHeader(
elements ...larkcard.MessageCardElement) (string, error) {
config := larkcard.NewMessageCardConfig().
WideScreenMode(false).
EnableForward(true).
UpdateMulti(true).
Build()
var aElementPool []larkcard.MessageCardElement
for _, element := range elements {
aElementPool = append(aElementPool, element)
}
// card message body
cardContent, err := larkcard.NewMessageCard().
Config(config).
Elements(
aElementPool,
).
String()
return cardContent, err
}
func newSimpleSendCard(
elements ...larkcard.MessageCardElement) (string,
@ -354,7 +401,6 @@ func withPicResolutionBtn(sessionID *string) larkcard.
Build()
return actions
}
func withRoleTagsBtn(sessionID *string, tags ...string) larkcard.
MessageCardElement {
var menuOptions []MenuOption
@ -409,32 +455,6 @@ func withRoleBtn(sessionID *string, titles ...string) larkcard.
return actions
}
func withAIModeBtn(sessionID *string, aiModeStrs []string) larkcard.MessageCardElement {
var menuOptions []MenuOption
for _, label := range aiModeStrs {
menuOptions = append(menuOptions, MenuOption{
label: label,
value: label,
})
}
cancelMenu := newMenu("选择模式",
map[string]interface{}{
"value": "0",
"kind": AIModeChooseKind,
"sessionId": *sessionID,
"msgId": *sessionID,
},
menuOptions...,
)
actions := larkcard.NewMessageCardAction().
Actions([]larkcard.MessageCardActionElement{cancelMenu}).
Layout(larkcard.MessageCardActionLayoutFlow.Ptr()).
Build()
return actions
}
func replyMsg(ctx context.Context, msg string, msgId *string) error {
msg, i := processMessage(msg)
if i != nil {
@ -496,7 +516,6 @@ func uploadImage(base64Str string) (*string, error) {
}
return resp.Data.ImageKey, nil
}
func replyImage(ctx context.Context, ImageKey *string,
msgId *string) error {
//fmt.Println("sendMsg", ImageKey, msgId)
@ -530,6 +549,7 @@ func replyImage(ctx context.Context, ImageKey *string,
return errors.New(resp.Msg)
}
return nil
}
func replayImageCardByBase64(ctx context.Context, base64Str string,
@ -548,38 +568,6 @@ func replayImageCardByBase64(ctx context.Context, base64Str string,
return nil
}
func replayImagePlainByBase64(ctx context.Context, base64Str string,
msgId *string) error {
imageKey, err := uploadImage(base64Str)
if err != nil {
return err
}
//example := "img_v2_041b28e3-5680-48c2-9af2-497ace79333g"
//imageKey := &example
//fmt.Println("imageKey", *imageKey)
err = replyImage(ctx, imageKey, msgId)
if err != nil {
return err
}
return nil
}
func replayVariantImageByBase64(ctx context.Context, base64Str string,
msgId *string, sessionId *string) error {
imageKey, err := uploadImage(base64Str)
if err != nil {
return err
}
//example := "img_v2_041b28e3-5680-48c2-9af2-497ace79333g"
//imageKey := &example
//fmt.Println("imageKey", *imageKey)
err = sendVarImageCard(ctx, *imageKey, msgId, sessionId)
if err != nil {
return err
}
return nil
}
func sendMsg(ctx context.Context, msg string, chatId *string) error {
//fmt.Println("sendMsg", msg, chatId)
msg, i := processMessage(msg)
@ -616,6 +604,37 @@ func sendMsg(ctx context.Context, msg string, chatId *string) error {
return nil
}
func PatchCard(ctx context.Context, msgId *string,
cardContent string) error {
//fmt.Println("sendMsg", msg, chatId)
client := initialization.GetLarkClient()
//content := larkim.NewTextMsgBuilder().
// Text(msg).
// Build()
//fmt.Println("content", content)
resp, err := client.Im.Message.Patch(ctx, larkim.NewPatchMessageReqBuilder().
MessageId(*msgId).
Body(larkim.NewPatchMessageReqBodyBuilder().
Content(cardContent).
Build()).
Build())
// handle the request error
if err != nil {
fmt.Println(err)
return err
}
// handle server-side errors
if !resp.Success() {
fmt.Println(resp.Code, resp.Msg, resp.RequestId())
return errors.New(resp.Msg)
}
return nil
}
func sendClearCacheCheckCard(ctx context.Context,
sessionId *string, msgId *string) {
newCard, _ := newSendCard(
@ -635,39 +654,47 @@ func sendSystemInstructionCard(ctx context.Context,
replyCard(ctx, msgId, newCard)
}
func sendPicCreateInstructionCard(ctx context.Context,
sessionId *string, msgId *string) {
newCard, _ := newSendCard(
withHeader("🖼️ 已进入图片创作模式", larkcard.TemplateBlue),
withPicResolutionBtn(sessionId),
withNote("提醒回复文本或图片让AI生成相关的图片。"))
replyCard(ctx, msgId, newCard)
func sendOnProcessCard(ctx context.Context,
sessionId *string, msgId *string) (*string, error) {
newCard, _ := newSendCardWithOutHeader(
withNote("正在思考,请稍等..."))
id, err := replyCardWithBackId(ctx, msgId, newCard)
if err != nil {
return nil, err
}
return id, nil
}
func sendPicModeCheckCard(ctx context.Context,
sessionId *string, msgId *string) {
newCard, _ := newSendCard(
withHeader("🖼️ 机器人提醒", larkcard.TemplateBlue),
withMainMd("收到图片,是否进入图片创作模式?"),
withNote("请注意,这将开始一个全新的对话,您将无法利用之前话题的历史信息"),
withPicModeDoubleCheckBtn(sessionId))
replyCard(ctx, msgId, newCard)
func updateTextCard(ctx context.Context, msg string,
msgId *string) error {
newCard, _ := newSendCardWithOutHeader(
withMainText(msg),
withNote("正在生成,请稍等..."))
err := PatchCard(ctx, msgId, newCard)
if err != nil {
return err
}
return nil
}
func sendNewTopicCard(ctx context.Context,
sessionId *string, msgId *string, content string) {
newCard, _ := newSendCard(
withHeader("👻️ 已开启新的话题", larkcard.TemplateBlue),
withMainText(content),
withNote("提醒:点击对话框参与回复,可保持话题连贯"))
replyCard(ctx, msgId, newCard)
func updateFinalCard(
ctx context.Context,
msg string,
msgId *string,
) error {
newCard, _ := newSendCardWithOutHeader(
withMainText(msg))
err := PatchCard(ctx, msgId, newCard)
if err != nil {
return err
}
return nil
}
func sendHelpCard(ctx context.Context,
sessionId *string, msgId *string) {
newCard, _ := newSendCard(
withHeader("🎒需要帮助吗?", larkcard.TemplateBlue),
withMainMd("**我是CZLChat-Feishu一款基于ChatGPT[模型gpt-3.5-0613]技术的智能聊天机器人**"),
withMainMd("**我是具备打字机效果的聊天机器人!**"),
withSplitLine(),
withMdAndExtraBtn(
"** 🆑 清除话题上下文**\n文本回复 *清除* 或 */clear*",
@ -677,25 +704,9 @@ func sendHelpCard(ctx context.Context,
"chatType": UserChatType,
"sessionId": *sessionId,
}, larkcard.MessageCardButtonTypeDanger)),
withSplitLine(),
withMainMd("🤖 **AI模式选择** \n"+" 文本回复 *AI模式* 或 */ai_mode*"),
withSplitLine(),
withMainMd("🛖 **内置角色列表** \n"+" 文本回复 *角色列表* 或 */roles*"),
withSplitLine(),
withMainMd("🥷 **角色扮演模式**\n文本回复*角色扮演* 或 */system*+空格+角色信息"),
withSplitLine(),
withMainMd("🎤 **AI语音对话**\n私聊模式下直接发送语音"),
withSplitLine(),
withMainMd("🎨 **图片创作模式**\n回复*图片创作* 或 */picture*"),
withSplitLine(),
withMainMd("🎰 **Token余额查询**\n回复*余额* 或 */balance*"),
withSplitLine(),
withMainMd("🔃️ **历史话题回档** 🚧\n"+" 进入话题的回复详情页,文本回复 *恢复* 或 */reload*"),
withSplitLine(),
withMainMd("📤 **话题内容导出** 🚧\n"+" 文本回复 *导出* 或 */export*"),
withSplitLine(),
withMainMd("🎰 **连续对话与多话题模式**\n"+" 点击对话框参与回复,可保持话题连贯。同时,单独提问即可开启全新新话题"),
withSplitLine(),
withMainMd("🎒 **需要更多帮助**\n文本回复 *帮助* 或 */help*"),
)
replyCard(ctx, msgId, newCard)
@ -719,24 +730,6 @@ func sendImageCard(ctx context.Context, imageKey string,
return nil
}
func sendVarImageCard(ctx context.Context, imageKey string,
msgId *string, sessionId *string) error {
newCard, _ := newSimpleSendCard(
withImageDiv(imageKey),
withSplitLine(),
// "one more" button
withOneBtn(newBtn("再来一张", map[string]interface{}{
"value": imageKey,
"kind": PicVarMoreKind,
"chatType": UserChatType,
"msgId": *msgId,
"sessionId": *sessionId,
}, larkcard.MessageCardButtonTypePrimary)),
)
replyCard(ctx, msgId, newCard)
return nil
}
func sendBalanceCard(ctx context.Context, msgId *string,
balance openai.BalanceResponse) {
newCard, _ := newSendCard(
@ -769,12 +762,3 @@ func SendRoleListCard(ctx context.Context,
withNote("提醒:选择内置场景,快速进入角色扮演模式。"))
replyCard(ctx, msgId, newCard)
}
func SendAIModeListsCard(ctx context.Context,
sessionId *string, msgId *string, aiModeStrs []string) {
newCard, _ := newSendCard(
withHeader("🤖 AI模式选择", larkcard.TemplateIndigo),
withAIModeBtn(sessionId, aiModeStrs),
withNote("提醒选择内置模式让AI更好的理解您的需求。"))
replyCard(ctx, msgId, newCard)
}


@ -2,34 +2,66 @@ package initialization
import (
"fmt"
"github.com/spf13/pflag"
"os"
"strconv"
"strings"
"sync"
"github.com/spf13/viper"
)
type Config struct {
FeishuAppId string
FeishuAppSecret string
FeishuAppEncryptKey string
FeishuAppVerificationToken string
FeishuBotName string
OpenaiApiKeys []string
HttpPort int
HttpsPort int
UseHttps bool
CertFile string
KeyFile string
OpenaiApiUrl string
HttpProxy string
AzureOn bool
AzureApiVersion string
AzureDeploymentName string
AzureResourceName string
AzureOpenaiToken string
// whether the config has been initialized
Initialized bool
EnableLog bool
FeishuAppId string
FeishuAppSecret string
FeishuAppEncryptKey string
FeishuAppVerificationToken string
FeishuBotName string
OpenaiApiKeys []string
HttpPort int
HttpsPort int
UseHttps bool
CertFile string
KeyFile string
OpenaiApiUrl string
HttpProxy string
AzureOn bool
AzureApiVersion string
AzureDeploymentName string
AzureResourceName string
AzureOpenaiToken string
AccessControlEnable bool
AccessControlMaxCountPerUserPerDay int
OpenAIHttpClientTimeOut int
OpenaiModel string
}
var (
cfg = pflag.StringP("config", "c", "./config.yaml", "apiserver config file path.")
config *Config
once sync.Once
)
/*
GetConfig will call LoadConfig once and return a global singleton, you should always use this function to get config
*/
func GetConfig() *Config {
once.Do(func() {
config = LoadConfig(*cfg)
config.Initialized = true
})
return config
}
/*
LoadConfig will load config and should only be called once, you should always use GetConfig to get config rather than
call this function directly
*/
func LoadConfig(cfg string) *Config {
viper.SetConfigFile(cfg)
viper.ReadInConfig()
@ -41,24 +73,29 @@ func LoadConfig(cfg string) *Config {
//fmt.Println(string(content))
config := &Config{
FeishuAppId: getViperStringValue("APP_ID", ""),
FeishuAppSecret: getViperStringValue("APP_SECRET", ""),
FeishuAppEncryptKey: getViperStringValue("APP_ENCRYPT_KEY", ""),
FeishuAppVerificationToken: getViperStringValue("APP_VERIFICATION_TOKEN", ""),
FeishuBotName: getViperStringValue("BOT_NAME", ""),
OpenaiApiKeys: getViperStringArray("OPENAI_KEY", nil),
HttpPort: getViperIntValue("HTTP_PORT", 9000),
HttpsPort: getViperIntValue("HTTPS_PORT", 9001),
UseHttps: getViperBoolValue("USE_HTTPS", false),
CertFile: getViperStringValue("CERT_FILE", "cert.pem"),
KeyFile: getViperStringValue("KEY_FILE", "key.pem"),
OpenaiApiUrl: getViperStringValue("API_URL", "https://oapi.czl.net"),
HttpProxy: getViperStringValue("HTTP_PROXY", ""),
AzureOn: getViperBoolValue("AZURE_ON", false),
AzureApiVersion: getViperStringValue("AZURE_API_VERSION", "2023-03-15-preview"),
AzureDeploymentName: getViperStringValue("AZURE_DEPLOYMENT_NAME", ""),
AzureResourceName: getViperStringValue("AZURE_RESOURCE_NAME", ""),
AzureOpenaiToken: getViperStringValue("AZURE_OPENAI_TOKEN", ""),
EnableLog: getViperBoolValue("ENABLE_LOG", false),
FeishuAppId: getViperStringValue("APP_ID", ""),
FeishuAppSecret: getViperStringValue("APP_SECRET", ""),
FeishuAppEncryptKey: getViperStringValue("APP_ENCRYPT_KEY", ""),
FeishuAppVerificationToken: getViperStringValue("APP_VERIFICATION_TOKEN", ""),
FeishuBotName: getViperStringValue("BOT_NAME", ""),
OpenaiApiKeys: getViperStringArray("OPENAI_KEY", nil),
HttpPort: getViperIntValue("HTTP_PORT", 9000),
HttpsPort: getViperIntValue("HTTPS_PORT", 9001),
UseHttps: getViperBoolValue("USE_HTTPS", false),
CertFile: getViperStringValue("CERT_FILE", "cert.pem"),
KeyFile: getViperStringValue("KEY_FILE", "key.pem"),
OpenaiApiUrl: getViperStringValue("API_URL", "https://api.openai.com"),
HttpProxy: getViperStringValue("HTTP_PROXY", ""),
AzureOn: getViperBoolValue("AZURE_ON", false),
AzureApiVersion: getViperStringValue("AZURE_API_VERSION", "2023-03-15-preview"),
AzureDeploymentName: getViperStringValue("AZURE_DEPLOYMENT_NAME", ""),
AzureResourceName: getViperStringValue("AZURE_RESOURCE_NAME", ""),
AzureOpenaiToken: getViperStringValue("AZURE_OPENAI_TOKEN", ""),
AccessControlEnable: getViperBoolValue("ACCESS_CONTROL_ENABLE", false),
AccessControlMaxCountPerUserPerDay: getViperIntValue("ACCESS_CONTROL_MAX_COUNT_PER_USER_PER_DAY", 0),
OpenAIHttpClientTimeOut: getViperIntValue("OPENAI_HTTP_CLIENT_TIMEOUT", 550),
OpenaiModel: getViperStringValue("OPENAI_MODEL", "gpt-3.5-turbo"),
}
return config
@ -72,8 +109,8 @@ func getViperStringValue(key string, defaultValue string) string {
return value
}
//OPENAI_KEY: sk-xxx,sk-xxx,sk-xxx
//result:[sk-xxx sk-xxx sk-xxx]
// OPENAI_KEY: sk-xxx,sk-xxx,sk-xxx
// result:[sk-xxx sk-xxx sk-xxx]
func getViperStringArray(key string, defaultValue []string) []string {
value := viper.GetString(key)
if value == "" {
@ -135,8 +172,7 @@ func (config *Config) GetKeyFile() string {
func filterFormatKey(keys []string) []string {
var result []string
for _, key := range keys {
if strings.HasPrefix(key, "sk-") || strings.HasPrefix(key,
"fk") {
if strings.HasPrefix(key, "sk-") {
result = append(result, key)
}
}


@ -2,12 +2,11 @@ package initialization
import (
"errors"
"io/ioutil"
"log"
"github.com/duke-git/lancet/v2/slice"
"github.com/duke-git/lancet/v2/validator"
"gopkg.in/yaml.v2"
"io/ioutil"
"log"
)
type Role struct {


@ -2,41 +2,54 @@ package main
import (
"context"
"encoding/json"
"fmt"
larkcard "github.com/larksuite/oapi-sdk-go/v3/card"
larkim "github.com/larksuite/oapi-sdk-go/v3/service/im/v1"
"gopkg.in/natefinch/lumberjack.v2"
"io"
"log"
"os"
"start-feishubot/handlers"
"start-feishubot/initialization"
"start-feishubot/services/openai"
"start-feishubot/utils"
"github.com/gin-gonic/gin"
sdkginext "github.com/larksuite/oapi-sdk-gin"
larkcard "github.com/larksuite/oapi-sdk-go/v3/card"
"github.com/larksuite/oapi-sdk-go/v3/event/dispatcher"
larkim "github.com/larksuite/oapi-sdk-go/v3/service/im/v1"
"github.com/spf13/pflag"
)
var (
cfg = pflag.StringP("config", "c", "./config.yaml", "apiserver config file path.")
sdkginext "github.com/larksuite/oapi-sdk-gin"
"github.com/larksuite/oapi-sdk-go/v3/event/dispatcher"
)
func main() {
initialization.InitRoleList()
pflag.Parse()
config := initialization.LoadConfig(*cfg)
initialization.LoadLarkClient(*config)
gpt := openai.NewChatGPT(*config)
handlers.InitHandlers(gpt, *config)
globalConfig := initialization.GetConfig()
// print the config that was actually loaded
globalConfigPrettyString, _ := json.MarshalIndent(globalConfig, "", " ")
log.Println(string(globalConfigPrettyString))
initialization.LoadLarkClient(*globalConfig)
gpt := openai.NewChatGPT(*globalConfig)
handlers.InitHandlers(gpt, *globalConfig)
if globalConfig.EnableLog {
logger := enableLog()
defer utils.CloseLogger(logger)
}
eventHandler := dispatcher.NewEventDispatcher(
config.FeishuAppVerificationToken, config.FeishuAppEncryptKey).
globalConfig.FeishuAppVerificationToken, globalConfig.FeishuAppEncryptKey).
OnP2MessageReceiveV1(handlers.Handler).
OnP2MessageReadV1(func(ctx context.Context, event *larkim.P2MessageReadV1) error {
return handlers.ReadHandler(ctx, event)
})
cardHandler := larkcard.NewCardActionHandler(
config.FeishuAppVerificationToken, config.FeishuAppEncryptKey,
globalConfig.FeishuAppVerificationToken, globalConfig.FeishuAppEncryptKey,
handlers.CardHandler())
r := gin.Default()
@ -51,7 +64,31 @@ func main() {
sdkginext.NewCardActionHandlerFunc(
cardHandler))
if err := initialization.StartServer(*config, r); err != nil {
err := initialization.StartServer(*globalConfig, r)
if err != nil {
log.Fatalf("failed to start server: %v", err)
}
}
func enableLog() *lumberjack.Logger {
// Set up the logger
var logger *lumberjack.Logger
logger = &lumberjack.Logger{
Filename: "logs/app.log",
MaxSize: 100, // megabytes
MaxAge: 365 * 10, // days
}
fmt.Printf("logger %T\n", logger)
// Set up the logger to write to both file and console
log.SetOutput(io.MultiWriter(logger, os.Stdout))
log.SetFlags(log.Ldate | log.Ltime)
// Write some log messages
log.Println("Starting application...")
return logger
}


@ -1,7 +1,3 @@
# Feel free to submit role presets you find useful here; keep the format consistent.
# For now, PR tags are limited to [ "日常办公", "生活助手", "代码专家", "文案撰写" ]
# For more ideas, see another project I contribute to: https://open-gpt.app/
- title: 周报生成
content: 请帮我把以下的工作内容填充为一篇完整的周报,用 markdown 格式以分点叙述的形式输出:
example: 重新优化设计稿,和前端再次沟通 UI 细节,确保落地


@ -0,0 +1,66 @@
package accesscontrol
import (
"start-feishubot/initialization"
"start-feishubot/utils"
"sync"
)
var accessCountMap = sync.Map{}
var currentDateFlag = ""
/*
CheckAllowAccessThenIncrement returns false if the user has already reached the configured daily
limit according to accessCountMap; otherwise it returns true and increases the access count by 1.
*/
func CheckAllowAccessThenIncrement(userId *string) bool {
// Begin a new day, clear the accessCountMap
currentDateAsString := utils.GetCurrentDateAsString()
if currentDateFlag != currentDateAsString {
accessCountMap = sync.Map{}
currentDateFlag = currentDateAsString
}
if CheckAllowAccess(userId) {
accessedCount, ok := accessCountMap.Load(*userId)
if !ok {
accessCountMap.Store(*userId, 1)
} else {
accessCountMap.Store(*userId, accessedCount.(int)+1)
}
return true
} else {
return false
}
}
func CheckAllowAccess(userId *string) bool {
if initialization.GetConfig().AccessControlMaxCountPerUserPerDay <= 0 {
return true
}
accessedCount, ok := accessCountMap.Load(*userId)
if !ok {
accessCountMap.Store(*userId, 0)
return true
}
// If the user has reached the configured daily limit, return false
if accessedCount.(int) >= initialization.GetConfig().AccessControlMaxCountPerUserPerDay {
return false
}
// Otherwise, return true
return true
}
func GetCurrentDateFlag() string {
return currentDateFlag
}
func GetAccessCountMap() *sync.Map {
return &accessCountMap
}
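A usage sketch for this package, assuming a loaded config with ACCESS_CONTROL_MAX_COUNT_PER_USER_PER_DAY set; the user id below is made up:

package main

import (
	"fmt"

	"start-feishubot/services/accesscontrol"
)

func main() {
	userId := "ou_example_user" // hypothetical Feishu user id
	if !accesscontrol.CheckAllowAccessThenIncrement(&userId) {
		// over the daily quota: refuse to answer, as MessageAction.Execute does
		fmt.Printf("user %s hit the daily limit on %s\n",
			userId, accesscontrol.GetCurrentDateFlag())
		return
	}
	fmt.Println("allowed; access count incremented for", userId)
}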


@ -0,0 +1,33 @@
package chatgpt
import (
"errors"
"github.com/sashabaranov/go-openai"
)
const (
ChatMessageRoleSystem = "system"
ChatMessageRoleUser = "user"
ChatMessageRoleAssistant = "assistant"
)
func CheckChatCompletionMessages(messages []openai.ChatCompletionMessage) error {
hasSystemMsg := false
for _, msg := range messages {
if msg.Role != ChatMessageRoleSystem && msg.Role != ChatMessageRoleUser && msg.Role != ChatMessageRoleAssistant {
return errors.New("invalid message role")
}
if msg.Role == ChatMessageRoleSystem {
if hasSystemMsg {
return errors.New("more than one system message")
}
hasSystemMsg = true
} else {
// for non-system messages, Content must not be empty
if msg.Content == "" {
return errors.New("empty content in non-system message")
}
}
}
return nil
}
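A quick sketch of how this validator behaves; the example lives in the same chatgpt package and the message contents are made up:

package chatgpt

import (
	"fmt"

	"github.com/sashabaranov/go-openai"
)

func ExampleCheckChatCompletionMessages() {
	ok := []openai.ChatCompletionMessage{
		{Role: "system", Content: ""}, // empty content is allowed for the system role
		{Role: "user", Content: "hello"},
	}
	bad := []openai.ChatCompletionMessage{
		{Role: "user", Content: ""}, // empty non-system content triggers an error
	}
	fmt.Println(CheckChatCompletionMessages(ok))  // <nil>
	fmt.Println(CheckChatCompletionMessages(bad)) // empty content in non-system message
}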


@ -0,0 +1,90 @@
package chatgpt
import (
"context"
"errors"
"fmt"
"github.com/sashabaranov/go-openai"
"io"
"start-feishubot/initialization"
customOpenai "start-feishubot/services/openai"
)
type Messages struct {
Role string `json:"role"`
Content string `json:"content"`
}
type ChatGPT struct {
config *initialization.Config
}
type Gpt3 interface {
StreamChat() error
StreamChatWithHistory() error
}
func NewGpt3(config *initialization.Config) *ChatGPT {
return &ChatGPT{config: config}
}
func (c *ChatGPT) StreamChat(ctx context.Context,
msg []customOpenai.Messages,
responseStream chan string) error {
//change msg type from Messages to openai.ChatCompletionMessage
chatMsgs := make([]openai.ChatCompletionMessage, len(msg))
for i, m := range msg {
chatMsgs[i] = openai.ChatCompletionMessage{
Role: m.Role,
Content: m.Content,
}
}
return c.StreamChatWithHistory(ctx, chatMsgs, 2000,
responseStream)
}
func (c *ChatGPT) StreamChatWithHistory(ctx context.Context, msg []openai.ChatCompletionMessage, maxTokens int,
responseStream chan string,
) error {
config := openai.DefaultConfig(c.config.OpenaiApiKeys[0])
config.BaseURL = c.config.OpenaiApiUrl + "/v1"
proxyClient, parseProxyError := customOpenai.GetProxyClient(c.config.HttpProxy)
if parseProxyError != nil {
return parseProxyError
}
config.HTTPClient = proxyClient
client := openai.NewClientWithConfig(config)
//pp.Printf("client: %v", client)
req := openai.ChatCompletionRequest{
Model: c.config.OpenaiModel,
Messages: msg,
N: 1,
Temperature: 0.7,
MaxTokens: maxTokens,
TopP: 1,
//Moderation: true,
//ModerationStop: true,
}
stream, err := client.CreateChatCompletionStream(ctx, req)
if err != nil {
return fmt.Errorf("CreateCompletionStream returned error: %v", err)
}
defer stream.Close()
for {
response, err := stream.Recv()
if errors.Is(err, io.EOF) {
//fmt.Println("Stream finished")
return nil
}
if err != nil {
fmt.Printf("Stream error: %v\n", err)
return err
}
responseStream <- response.Choices[0].Delta.Content
}
return nil
}


@ -0,0 +1,62 @@
package chatgpt
import (
"context"
"fmt"
"start-feishubot/initialization"
"start-feishubot/services/openai"
"testing"
"time"
)
func TestChatGPT_streamChat(t *testing.T) {
// initialize the config
config := initialization.LoadConfig("../../config.yaml")
// prepare the test cases
testCases := []struct {
msg []openai.Messages
wantOutput string
wantErr bool
}{
{
msg: []openai.Messages{
{
Role: "system",
Content: "从现在起你要化身职场语言大师,你需要用婉转的方式回复老板想你提出的问题,或像领导提出请求。",
},
{
Role: "user",
Content: "领导,我想请假一天",
},
},
wantOutput: "",
wantErr: false,
},
}
// run the test cases
for _, tc := range testCases {
// prepare input and output
responseStream := make(chan string)
ctx := context.Background()
c := &ChatGPT{config: config}
// start a goroutine to drive the streaming chat
go func() {
err := c.StreamChat(ctx, tc.msg, responseStream)
if err != nil {
t.Errorf("streamChat() error = %v, wantErr %v", err, tc.wantErr)
}
}()
// wait for output and check it matches expectations
select {
case gotOutput := <-responseStream:
fmt.Printf("gotOutput: %v\n", gotOutput)
case <-time.After(5 * time.Second):
t.Errorf("streamChat() timeout, expected output not received")
}
}
}


@ -0,0 +1,20 @@
package chatgpt
import (
"github.com/pandodao/tokenizer-go"
"github.com/sashabaranov/go-openai"
"strings"
)
func CalcTokenLength(text string) int {
text = strings.TrimSpace(text)
return tokenizer.MustCalToken(text)
}
func CalcTokenFromMsgList(msgs []openai.ChatCompletionMessage) int {
var total int
for _, msg := range msgs {
total += CalcTokenLength(msg.Content)
}
return total
}
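A small usage sketch for budgeting prompt size before a request; the 2000 cap mirrors the maxTokens value StreamChat passes above, and the messages are made up:

package chatgpt

import (
	"fmt"

	"github.com/sashabaranov/go-openai"
)

func ExampleCalcTokenFromMsgList() {
	msgs := []openai.ChatCompletionMessage{
		{Role: "user", Content: "hello world"},
		{Role: "assistant", Content: "hi there"},
	}
	used := CalcTokenFromMsgList(msgs)
	fmt.Printf("prompt uses ~%d tokens\n", used)
	if used > 2000 {
		fmt.Println("history too long; consider trimming older turns")
	}
}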


@ -0,0 +1,50 @@
package chatgpt
import "testing"
func TestCalcTokenLength(t *testing.T) {
type args struct {
text string
}
tests := []struct {
name string
args args
want int
}{
{
name: "eng",
args: args{
text: "hello world",
},
want: 2,
},
{
name: "cn",
args: args{
text: "我和我的祖国",
},
want: 13,
},
{
name: "empty",
args: args{
text: "",
},
want: 0,
},
{
name: "empty",
args: args{
text: " ",
},
want: 0,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := CalcTokenLength(tt.args.text); got != tt.want {
t.Errorf("CalcTokenLength() = %v, want %v", got, tt.want)
}
})
}
}


@ -1,9 +1,8 @@
package services
import (
"time"
"github.com/patrickmn/go-cache"
"time"
)
type MsgService struct {


@ -6,12 +6,23 @@ import (
"time"
)
type BillingSubScrip struct {
HardLimitUsd float64 `json:"hard_limit_usd"`
AccessUntil float64 `json:"access_until"`
}
type BillingUsage struct {
TotalUsage float64 `json:"total_usage"`
//https://api.openai.com/dashboard/billing/credit_grants
type Billing struct {
Object string `json:"object"`
TotalGranted float64 `json:"total_granted"`
TotalUsed float64 `json:"total_used"`
TotalAvailable float64 `json:"total_available"`
Grants struct {
Object string `json:"object"`
Data []struct {
Object string `json:"object"`
ID string `json:"id"`
GrantAmount float64 `json:"grant_amount"`
UsedAmount float64 `json:"used_amount"`
EffectiveAt float64 `json:"effective_at"`
ExpiresAt float64 `json:"expires_at"`
} `json:"data"`
} `json:"grants"`
}
type BalanceResponse struct {
@ -23,47 +34,29 @@ type BalanceResponse struct {
}
func (gpt *ChatGPT) GetBalance() (*BalanceResponse, error) {
fmt.Println("进入")
var data1 BillingSubScrip
var data Billing
err := gpt.sendRequestWithBodyType(
gpt.ApiUrl+"/v1/dashboard/billing/subscription",
gpt.ApiUrl+"/dashboard/billing/credit_grants",
http.MethodGet,
nilBody,
nil,
&data1,
&data,
)
fmt.Println("出错1", err)
if err != nil {
return nil, fmt.Errorf("failed to get billing subscription: %v", err)
}
nowdate := time.Now()
enddate := nowdate.Format("2006-01-02")
startdate := nowdate.AddDate(0, 0, -100).Format("2006-01-02")
var data2 BillingUsage
err = gpt.sendRequestWithBodyType(
gpt.ApiUrl+fmt.Sprintf("/v1/dashboard/billing/usage?start_date=%s&end_date=%s", startdate, enddate),
http.MethodGet,
nilBody,
nil,
&data2,
)
fmt.Println(data2)
fmt.Println("出错2", err)
if err != nil {
return nil, fmt.Errorf("failed to get billing subscription: %v", err)
return nil, fmt.Errorf("failed to get billing data: %v", err)
}
balance := &BalanceResponse{
TotalGranted: data1.HardLimitUsd,
TotalUsed: data2.TotalUsage / 100,
TotalAvailable: data1.HardLimitUsd - data2.TotalUsage/100,
TotalGranted: data.TotalGranted,
TotalUsed: data.TotalUsed,
TotalAvailable: data.TotalAvailable,
ExpiresAt: time.Now(),
EffectiveAt: time.Now(),
}
if data1.AccessUntil > 0 {
balance.EffectiveAt = time.Now()
balance.ExpiresAt = time.Unix(int64(data1.AccessUntil), 0)
if len(data.Grants.Data) > 0 {
balance.EffectiveAt = time.Unix(int64(data.Grants.Data[0].EffectiveAt), 0)
balance.ExpiresAt = time.Unix(int64(data.Grants.Data[0].ExpiresAt), 0)
}
return balance, nil


@ -9,11 +9,10 @@ import (
"mime/multipart"
"net/http"
"net/url"
"strings"
"time"
"start-feishubot/initialization"
"start-feishubot/services/loadbalancer"
"strings"
"time"
)
type PlatForm string
@ -38,6 +37,7 @@ type ChatGPT struct {
Lb *loadbalancer.LoadBalancer
ApiKey []string
ApiUrl string
ApiModel string
HttpProxy string
Platform PlatForm
AzureConfig AzureConfig
@ -48,7 +48,7 @@ const (
jsonBody requestBodyType = iota
formVoiceDataBody
formPictureDataBody
streamBody
nilBody
)
@ -91,6 +91,7 @@ func (gpt *ChatGPT) doAPIRequestWithRetry(url, method string,
return err
}
requestBodyData = formBody.Bytes()
case nilBody:
requestBodyData = nil
@ -111,6 +112,11 @@ func (gpt *ChatGPT) doAPIRequestWithRetry(url, method string,
if bodyType == formVoiceDataBody || bodyType == formPictureDataBody {
req.Header.Set("Content-Type", writer.FormDataContentType())
}
if bodyType == streamBody {
req.Header.Set("Accept", "text/event-stream")
req.Header.Set("Connection", "keep-alive")
req.Header.Set("Cache-Control", "no-cache")
}
if gpt.Platform == OpenAI {
req.Header.Set("Authorization", "Bearer "+api.Key)
} else {
@ -120,10 +126,6 @@ func (gpt *ChatGPT) doAPIRequestWithRetry(url, method string,
var response *http.Response
var retry int
for retry = 0; retry <= maxRetries; retry++ {
// set body
if retry > 0 {
req.Body = ioutil.NopCloser(bytes.NewReader(requestBodyData))
}
response, err = client.Do(req)
//fmt.Println("--------------------")
//fmt.Println("req", req.Header)
@ -135,7 +137,7 @@ func (gpt *ChatGPT) doAPIRequestWithRetry(url, method string,
fmt.Println("body", string(body))
gpt.Lb.SetAvailability(api.Key, false)
if retry == maxRetries {
if retry == maxRetries || bodyType == streamBody {
break
}
time.Sleep(time.Duration(retry+1) * time.Second)
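The two stream-related edits in this file (the text/event-stream request headers above, and breaking out of the retry loop when bodyType == streamBody, since a partially consumed stream cannot be replayed) imply that the caller drains an SSE body. A hedged sketch of such a drain loop, assuming the OpenAI-style "data: " framing and that bufio and strings are imported:
// Minimal SSE drain loop (sketch, not from this commit).
reader := bufio.NewReader(response.Body)
for {
	line, err := reader.ReadString('\n')
	if err != nil {
		break // io.EOF or a dropped connection ends the stream
	}
	line = strings.TrimSpace(line)
	if !strings.HasPrefix(line, "data: ") {
		continue // skip blank keep-alive lines
	}
	payload := strings.TrimPrefix(line, "data: ")
	if payload == "[DONE]" {
		break
	}
	// json.Unmarshal([]byte(payload), &chunk) for each streamed delta
}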
@ -169,27 +171,38 @@ func (gpt *ChatGPT) sendRequestWithBodyType(link, method string,
bodyType requestBodyType,
requestBody interface{}, responseBody interface{}) error {
var err error
client := &http.Client{Timeout: 110 * time.Second}
if gpt.HttpProxy == "" {
err = gpt.doAPIRequestWithRetry(link, method, bodyType,
requestBody, responseBody, client, 3)
proxyString := gpt.HttpProxy
client, parseProxyError := GetProxyClient(proxyString)
if parseProxyError != nil {
return parseProxyError
}
err = gpt.doAPIRequestWithRetry(link, method, bodyType,
requestBody, responseBody, client, 3)
return err
}
func GetProxyClient(proxyString string) (*http.Client, error) {
var client *http.Client
timeOutDuration := time.Duration(initialization.GetConfig().OpenAIHttpClientTimeOut) * time.Second
if proxyString == "" {
client = &http.Client{Timeout: timeOutDuration}
} else {
proxyUrl, err := url.Parse(gpt.HttpProxy)
proxyUrl, err := url.Parse(proxyString)
if err != nil {
return err
return nil, err
}
transport := &http.Transport{
Proxy: http.ProxyURL(proxyUrl),
}
proxyClient := &http.Client{
client = &http.Client{
Transport: transport,
Timeout: 110 * time.Second,
Timeout: timeOutDuration,
}
err = gpt.doAPIRequestWithRetry(link, method, bodyType,
requestBody, responseBody, proxyClient, 3)
}
return err
return client, nil
}
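A small usage sketch of the extracted helper: an empty proxy string yields a plain client with the configured timeout, otherwise traffic goes through the proxy (the URL below is an example, not from the config, and "log" is assumed to be imported):
client, err := GetProxyClient("http://127.0.0.1:7890")
if err != nil {
	log.Fatalf("invalid proxy url: %v", err)
}
resp, err := client.Get("https://api.openai.com/v1/models")
if err == nil {
	resp.Body.Close()
}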
func NewChatGPT(config initialization.Config) *ChatGPT {
@ -212,6 +225,7 @@ func NewChatGPT(config initialization.Config) *ChatGPT {
ApiUrl: config.OpenaiApiUrl,
HttpProxy: config.HttpProxy,
Platform: platform,
ApiModel: config.OpenaiModel,
AzureConfig: AzureConfig{
BaseURL: AzureApiUrlV1,
ResourceName: config.AzureResourceName,

View File

@ -2,13 +2,8 @@ package openai
import (
"errors"
"strings"
"github.com/pandodao/tokenizer-go"
)
type AIMode float64
const (
Fresh AIMode = 0.1
Warmth AIMode = 0.4
@ -49,7 +44,6 @@ type ChatGPTResponseBody struct {
Choices []ChatGPTChoiceItem `json:"choices"`
Usage map[string]interface{} `json:"usage"`
}
type ChatGPTChoiceItem struct {
Message Messages `json:"message"`
Index int `json:"index"`
@ -61,24 +55,20 @@ type ChatGPTRequestBody struct {
Model string `json:"model"`
Messages []Messages `json:"messages"`
MaxTokens int `json:"max_tokens"`
Temperature AIMode `json:"temperature"`
Temperature float32 `json:"temperature"`
TopP int `json:"top_p"`
FrequencyPenalty int `json:"frequency_penalty"`
PresencePenalty int `json:"presence_penalty"`
Stream bool `json:"stream" default:"false"`
}
func (msg *Messages) CalculateTokenLength() int {
text := strings.TrimSpace(msg.Content)
return tokenizer.MustCalToken(text)
}
func (gpt *ChatGPT) Completions(msg []Messages, aiMode AIMode) (resp Messages,
func (gpt *ChatGPT) Completions(msg []Messages) (resp Messages,
err error) {
requestBody := ChatGPTRequestBody{
Model: engine,
Model: gpt.ApiModel,
Messages: msg,
MaxTokens: maxTokens,
Temperature: aiMode,
Temperature: temperature,
TopP: 1,
FrequencyPenalty: 0,
PresencePenalty: 0,

View File

@ -2,9 +2,8 @@ package openai
import (
"fmt"
"testing"
"start-feishubot/initialization"
"testing"
)
func TestCompletions(t *testing.T) {
@ -17,7 +16,7 @@ func TestCompletions(t *testing.T) {
gpt := NewChatGPT(*config)
resp, err := gpt.Completions(msgs, Balance)
resp, err := gpt.Completions(msgs)
if err != nil {
t.Errorf("TestCompletions failed with error: %v", err)
}

View File

@ -1,6 +1,7 @@
package services
import (
"encoding/json"
"start-feishubot/services/openai"
"time"
@ -20,7 +21,6 @@ type SessionMeta struct {
Mode SessionMode `json:"mode"`
Msg []openai.Messages `json:"msg,omitempty"`
PicSetting PicSetting `json:"pic_setting,omitempty"`
AIMode openai.AIMode `json:"ai_mode,omitempty"`
}
const (
@ -35,14 +35,10 @@ const (
)
type SessionServiceCacheInterface interface {
Get(sessionId string) *SessionMeta
Set(sessionId string, sessionMeta *SessionMeta)
GetMsg(sessionId string) []openai.Messages
SetMsg(sessionId string, msg []openai.Messages)
SetMode(sessionId string, mode SessionMode)
GetMode(sessionId string) SessionMode
GetAIMode(sessionId string) openai.AIMode
SetAIMode(sessionId string, aiMode openai.AIMode)
SetPicResolution(sessionId string, resolution Resolution)
GetPicResolution(sessionId string) string
Clear(sessionId string)
@ -50,22 +46,6 @@ type SessionServiceCacheInterface interface {
var sessionServices *SessionService
// implement Get interface
func (s *SessionService) Get(sessionId string) *SessionMeta {
sessionContext, ok := s.cache.Get(sessionId)
if !ok {
return nil
}
sessionMeta := sessionContext.(*SessionMeta)
return sessionMeta
}
// implement Set interface
func (s *SessionService) Set(sessionId string, sessionMeta *SessionMeta) {
maxCacheTime := time.Hour * 12
s.cache.Set(sessionId, sessionMeta, maxCacheTime)
}
func (s *SessionService) GetMode(sessionId string) SessionMode {
// Get the session mode from the cache.
sessionContext, ok := s.cache.Get(sessionId)
@ -89,29 +69,6 @@ func (s *SessionService) SetMode(sessionId string, mode SessionMode) {
s.cache.Set(sessionId, sessionMeta, maxCacheTime)
}
func (s *SessionService) GetAIMode(sessionId string) openai.AIMode {
sessionContext, ok := s.cache.Get(sessionId)
if !ok {
return openai.Balance
}
sessionMeta := sessionContext.(*SessionMeta)
return sessionMeta.AIMode
}
// SetAIMode set the ai mode for the session.
func (s *SessionService) SetAIMode(sessionId string, aiMode openai.AIMode) {
maxCacheTime := time.Hour * 12
sessionContext, ok := s.cache.Get(sessionId)
if !ok {
sessionMeta := &SessionMeta{AIMode: aiMode}
s.cache.Set(sessionId, sessionMeta, maxCacheTime)
return
}
sessionMeta := sessionContext.(*SessionMeta)
sessionMeta.AIMode = aiMode
s.cache.Set(sessionId, sessionMeta, maxCacheTime)
}
func (s *SessionService) GetMsg(sessionId string) (msg []openai.Messages) {
sessionContext, ok := s.cache.Get(sessionId)
if !ok {
@ -189,7 +146,8 @@ func GetSessionCache() SessionServiceCacheInterface {
func getStrPoolTotalLength(strPool []openai.Messages) int {
var total int
for _, v := range strPool {
total += v.CalculateTokenLength()
bytes, _ := json.Marshal(v)
total += len(string(bytes))
}
return total
}
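Note the semantics change here: the history-trimming budget is now the JSON-encoded byte length of each message rather than a tokenizer count. A quick sketch of the difference, with the Messages field names assumed from their JSON tags:
// "hello world" is roughly 2 GPT tokens, but its JSON encoding is ~40
// bytes, so any threshold tuned for tokens needs rescaling here.
msg := openai.Messages{Role: "user", Content: "hello world"}
b, _ := json.Marshal(msg)
fmt.Println(len(b)) // byte length of the encoded message, not tokens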

12
code/utils/commonUtils.go Normal file
View File

@ -0,0 +1,12 @@
package utils
import (
"time"
)
func GetCurrentDateAsString() string {
return time.Now().Format("2006-01-02")
// For local testing you can use this instead (shrinks one day to 10 seconds; needs "strconv"):
//return strconv.Itoa((time.Now().Second() + 100000) / 10)
}
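Presumably this date string keys the per-user daily quota behind the ACCESS_CONTROL_* settings in docker-compose below; a hypothetical sketch of such a counter key:
// Hypothetical: one counter bucket per user per calendar day.
key := fmt.Sprintf("access:%s:%s", userID, GetCurrentDateAsString())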

24
code/utils/logUtils.go Normal file
View File

@ -0,0 +1,24 @@
package utils
import (
"fmt"
"gopkg.in/natefinch/lumberjack.v2"
"log"
"time"
)
type MyLogWriter struct {
}
func (writer MyLogWriter) Write(bytes []byte) (int, error) {
return fmt.Print(time.Now().UTC().Format("2006-01-02T15:04:05.999Z") + string(bytes))
}
func CloseLogger(logger *lumberjack.Logger) {
err := logger.Close()
if err != nil {
log.Println(err)
} else {
log.Println("logger closed")
}
}
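A hedged wiring sketch for these two helpers; the filename and rotation size are invented, and log.SetFlags(0) avoids a doubled timestamp since MyLogWriter prepends its own:
// Hypothetical setup: MyLogWriter stamps console output in UTC, while
// lumberjack would handle rotation for anything routed to the file.
logger := &lumberjack.Logger{Filename: "logs/app.log", MaxSize: 10}
defer CloseLogger(logger)
log.SetFlags(0)              // MyLogWriter already prepends a timestamp
log.SetOutput(MyLogWriter{}) // or route to logger for the rotating file
log.Println("logger initialized")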

View File

@ -1,21 +1,26 @@
version: '3.3'
services:
feishu-chatgpt:
container_name: feishu-chatgpt
container_name: Feishu-OpenAI-Stream-Chatbot
build:
context: .
dockerfile: Dockerfile
ports:
- "9000:9000/tcp"
# volumes:
# - ./code/config.yaml:/app/config.yaml:ro
volumes:
# - ./code/config.yaml:/app/config.yaml:ro
# Note: the container path on the right is not rooted at /; see WORKDIR in the Dockerfile
- ./logs:/app/logs
environment:
################ Read the settings below together with config.example.yaml ################
# Logging: disabled by default; enable it to inspect logs
- ENABLE_LOG=false
- APP_ID=cli_axxx
- APP_SECRET=xxx
- APP_ENCRYPT_KEY=xxx
- APP_VERIFICATION_TOKEN=xxx
# Make sure this matches the settings in the Feishu app admin console
- BOT_NAME=chatGpt
- BOT_NAME=xxx
# OpenAI API keys support load balancing; separate multiple keys with commas
- OPENAI_KEY=sk-xxx,sk-xxx,sk-xxx
# Server configuration
@ -25,6 +30,13 @@ services:
- CERT_FILE=cert.pem
- KEY_FILE=key.pem
# OpenAI endpoint; normally leave as-is unless you run your own reverse proxy
- API_URL=https://oapi.czl.net
- API_URL=https://api.openai.com
# Proxy setting, e.g. - HTTP_PROXY=http://127.0.0.1:7890; empty means no proxy
- HTTP_PROXY
## Access control
# Whether to enable access control. Disabled by default.
- ACCESS_CONTROL_ENABLE=false
# Maximum questions per user per day. Default 0; values <= 0 mean unlimited.
- ACCESS_CONTROL_MAX_COUNT_PER_USER_PER_DAY=0
# Timeout in seconds for ordinary (non-streaming) HTTP requests to OpenAI; defaults to 550 if unset
- OPENAI_HTTP_CLIENT_TIMEOUT

Binary file not shown (before: 324 KiB)

Binary file not shown (before: 51 KiB)

Binary file not shown (before: 261 KiB)

Binary file not shown (before: 204 KiB)
2
s.yaml
View File

@ -28,7 +28,7 @@ services:
name: "feishu-chatgpt"
description: 'a simple feishubot by serverless devs'
codeUri: './code'
caPort: 9000
cAPort: 9000
customRuntimeConfig:
command:
- ./target/main