# Embedded Class Management System (Microservice Edition)

**Repository Path**: yuxiang_zeng/class_system_microservice

## Basic Information

- **Project Name**: Embedded Class Management System (Microservice Edition)
- **Description**: This project implements a one-stop office-automation system for the embedded class. It addresses the long-standing lack of unified management of class affairs across cohorts, such as scattered grade records and attendance that had to be tallied by hand. To avoid single points of failure, the services are deployed as clusters, and the microservice architecture makes the system easy to extend and allows updates without downtime.
- **Primary Language**: Java
- **License**: GPL-3.0
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 9
- **Forks**: 0
- **Created**: 2021-11-06
- **Last Updated**: 2023-09-11

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

# Embedded Class Management System (Microservice Edition)

This project implements a one-stop office-automation system for the embedded class. It addresses the long-standing lack of unified management of class affairs across cohorts, such as scattered grade records and attendance that had to be tallied by hand. To avoid single points of failure, the services are deployed as clusters, and the microservice architecture makes the system easy to extend and allows updates without downtime.

**Tech stack: Spring Boot, MyBatis-Plus, Spring Cloud Alibaba, Nacos, OpenFeign, Sentinel, Seata, Redis, MySQL, Nginx, OneNet, Docker**

## 1. Service Breakdown

The services are divided according to the class's current business needs:

- **Gateway service (2999)**: the unified entry point to the system; routes requests to the other services
- **User service (2998)**: adds and removes users and looks up user information
- **Auth service (2997)**: authenticates and authorizes users logging into the system
- **Student management service (2996)**: manages student information
- **Teacher management service (2995)**: manages teacher information
- **Enrollment service (2994)**: supports each cohort's enrollment, including filling in application information and handling admissions
- **Association service (2993)**: manages association members' membership information and attendance records
- **Project management service (2992)**: manages project assignment and progress tracking for the embedded class, along with weekly reports and work logs
- **File management service (2991)**: organizes and manages files uploaded to the system
- **Class information service (2990)**: manages the class introduction and publicity content
- **News service (2889)**: manages trending news items
- **Course service (2888)**: manages the class's courses
- **Grade service (2887)**: manages students' grades and produces statistics and analysis
- **Front-desk service (2886)**: manages the content shown on the public front end
- **Check-in service (2885)**: combines class check-in records with the punch-card machine for statistics and analysis
- **Class-fund service (2884)**: manages class funds, including fee collection, payment, and reconciliation
- **Event-tracking service (2883)**: records and stores the actions of every user who logs into the system
- **Config service (2882)**: stores the configuration the system needs

## 2. Architecture Diagram

![Microservice architecture diagram](http://121.43.130.206:82/%E9%A1%B9%E7%9B%AE%E5%BE%AE%E6%9C%8D%E5%8A%A1%E6%9E%B6%E6%9E%84%E5%9B%BE%20%281%29.png)

## 3. Distributed Nacos Cluster Deployment (with Docker)

1. Create the database and user:

```sql
create database nacos default character set = utf8;
create user 'nacos'@'%' identified by '123456';
grant all privileges on nacos.* to 'nacos'@'%';
```

2. Run the schema script in the newly created database: [nacos.sql](http://121.43.130.206:82/nacos.sql)

3. Run the Docker commands.

Pull the image:

```shell
docker pull nacos/nacos-server
```

Point it at the database and start it:

```shell
docker run -d \
-e PREFER_HOST_MODE=hostname \
-e MODE=standalone \
-e SPRING_DATASOURCE_PLATFORM=mysql \
-e MYSQL_SERVICE_HOST={mysql-host} \
-e MYSQL_SERVICE_PORT={mysql-port} \
-e MYSQL_SERVICE_USER={mysql-user} \
-e MYSQL_SERVICE_PASSWORD={mysql-password} \
-e MYSQL_SERVICE_DB_NAME={mysql-database} \
-p 8848:8848 \
--name nacos-standalone-mysql \
--restart=always \
nacos/nacos-server
```

After starting a node on each host, configure nginx to load-balance across them:

```nginx
upstream nacos {
    server host1-ip:8848;
    server host2-ip:8849;
    server host3-ip:8850;
}

server {
    listen your-chosen-port;
    server_name nginx-host-ip;
    location / {
        proxy_pass http://nacos;
    }
}
```

That completes the setup.
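Before pointing clients at the proxy, it is worth confirming that each node and the nginx front door actually respond. A minimal sketch, assuming the placeholder hosts and ports used above; the readiness path shown is the one exposed by the Nacos 1.x console API, so adjust it for your Nacos version:

```shell
# Probe a Nacos node directly (host assumed from the nginx upstream above)
curl -s http://host1-ip:8848/nacos/v1/console/health/readiness

# Probe through nginx; a successful response means the proxy is routing to the cluster
curl -s http://nginx-host-ip:your-chosen-port/nacos/v1/console/health/readiness
```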
## 4. Deploying the Seata Cluster

Pull the image:

```shell
docker pull seataio/seata-server
```

Create a seata-config directory and create registry.conf and file.conf inside it.

registry.conf:

```config
registry {
  # file, nacos, eureka, redis, zk, consul, etcd3, sofa
  type = "nacos"

  nacos {
    serverAddr = "your-nacos-address"
    namespace = "public"
    cluster = "default"
  }
  eureka {
    serviceUrl = "http://localhost:8761/eureka"
    application = "default"
    weight = "1"
  }
  redis {
    serverAddr = "localhost:6379"
    db = "0"
  }
  zk {
    cluster = "default"
    serverAddr = "127.0.0.1:2181"
    session.timeout = 6000
    connect.timeout = 2000
  }
  consul {
    cluster = "default"
    serverAddr = "127.0.0.1:8500"
  }
  etcd3 {
    cluster = "default"
    serverAddr = "http://localhost:2379"
  }
  sofa {
    serverAddr = "127.0.0.1:9603"
    application = "default"
    region = "DEFAULT_ZONE"
    datacenter = "DefaultDataCenter"
    cluster = "default"
    group = "SEATA_GROUP"
    addressWaitTime = "3000"
  }
  file {
    name = "file.conf"
  }
}

config {
  # file, nacos, apollo, zk, consul, etcd3
  type = "file"

  file {
    name = "file.conf"
  }
}
```

file.conf:

```config
transport {
  # tcp, udt, unix-domain-socket
  type = "TCP"
  # NIO, NATIVE
  server = "NIO"
  # enable heartbeat
  heartbeat = true
  # thread factory for netty
  thread-factory {
    boss-thread-prefix = "NettyBoss"
    worker-thread-prefix = "NettyServerNIOWorker"
    server-executor-thread-prefix = "NettyServerBizHandler"
    share-boss-worker = false
    client-selector-thread-prefix = "NettyClientSelector"
    client-selector-thread-size = 1
    client-worker-thread-prefix = "NettyClientWorkerThread"
    # netty boss thread size, will not be used for UDT
    boss-thread-size = 1
    # auto default pin or 8
    worker-thread-size = 8
  }
  shutdown {
    # when destroying the server, wait this many seconds
    wait = 3
  }
  serialization = "seata"
  compressor = "none"
}

service {
  # change to your own transaction group name
  vgroup_mapping.fsp_tx_group = "xxx_tx_group"
  default.grouplist = "host-ip:8091"
  enableDegrade = false
  disable = false
  max.commit.retry.timeout = "-1"
  max.rollback.retry.timeout = "-1"
  disableGlobalTransaction = false
}

client {
  async.commit.buffer.limit = 10000
  lock {
    retry.internal = 10
    retry.times = 30
  }
  report.retry.count = 5
  tm.commit.retry.count = 1
  tm.rollback.retry.count = 1
}

## transaction log store
store {
  ## store mode: file, db
  mode = "db"

  ## file store
  file {
    dir = "sessionStore"
    # branch session size; if exceeded, first try to compress the lock key, and throw if still exceeded
    max-branch-session-size = 16384
    # global session size; throws if exceeded
    max-global-session-size = 512
    # file buffer size; a new buffer is allocated if exceeded
    file-write-buffer-cache-size = 16384
    # batch read size during recovery
    session.reload.read_size = 100
    # async, sync
    flush-disk-mode = async
  }

  ## database store
  db {
    ## the implementation of javax.sql.DataSource, such as DruidDataSource (druid) / BasicDataSource (dbcp)
    datasource = "druid"
    ## mysql / oracle / h2 / oceanbase etc.
    db-type = "mysql"
    driver-class-name = "com.mysql.jdbc.Driver"
    url = "jdbc:mysql://your-db-host:your-db-port/seata"
    user = "your-db-user"
    password = "your-db-password"
    min-conn = 1
    max-conn = 3
    global.table = "global_table"
    branch.table = "branch_table"
    lock-table = "lock_table"
    query-limit = 100
  }
}

lock {
  ## the lock store mode: local, remote
  mode = "remote"

  local {
    ## store locks in the user's database
  }
  remote {
    ## store locks in Seata's server
  }
}

recovery {
  # schedule committing retry period in milliseconds
  committing-retry-period = 1000
  # schedule async committing retry period in milliseconds
  asyn-committing-retry-period = 1000
  # schedule rollbacking retry period in milliseconds
  rollbacking-retry-period = 1000
  # schedule timeout retry period in milliseconds
  timeout-retry-period = 1000
}

transaction {
  undo.data.validation = true
  undo.log.serialization = "jackson"
  undo.log.save.days = 7
  # schedule to delete expired undo_log, in milliseconds
  undo.log.delete.period = 86400000
  undo.log.table = "undo_log"
}

## metrics settings
metrics {
  enabled = false
  registry-type = "compact"
  # multiple exporters, comma separated
  exporter-list = "prometheus"
  exporter-prometheus-port = 9898
}

support {
  ## spring
  spring {
    # auto-proxy the DataSource bean
    datasource.autoproxy = false
  }
}
```
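Both files have to exist at the paths that the `docker run` command below mounts into the container. A minimal staging sketch, assuming the files were written in the current directory:

```shell
# Create the config directory the container will mount from
mkdir -p /opt/seata-config

# Stage both Seata config files; the docker run command below binds these exact paths
cp registry.conf file.conf /opt/seata-config/
```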
**Place both files under /opt/seata-config; the container mounts them from that path.**

Create a seata database and run the following script in it:

```sql
CREATE TABLE IF NOT EXISTS `global_table` (
    `xid`                       VARCHAR(128) NOT NULL,
    `transaction_id`            BIGINT,
    `status`                    TINYINT NOT NULL,
    `application_id`            VARCHAR(32),
    `transaction_service_group` VARCHAR(32),
    `transaction_name`          VARCHAR(128),
    `timeout`                   INT,
    `begin_time`                BIGINT,
    `application_data`          VARCHAR(2000),
    `gmt_create`                DATETIME,
    `gmt_modified`              DATETIME,
    PRIMARY KEY (`xid`),
    KEY `idx_gmt_modified_status` (`gmt_modified`, `status`),
    KEY `idx_transaction_id` (`transaction_id`)
) ENGINE = InnoDB DEFAULT CHARSET = utf8;

-- the table to store BranchSession data
CREATE TABLE IF NOT EXISTS `branch_table` (
    `branch_id`         BIGINT NOT NULL,
    `xid`               VARCHAR(128) NOT NULL,
    `transaction_id`    BIGINT,
    `resource_group_id` VARCHAR(32),
    `resource_id`       VARCHAR(256),
    `branch_type`       VARCHAR(8),
    `status`            TINYINT,
    `client_id`         VARCHAR(64),
    `application_data`  VARCHAR(2000),
    `gmt_create`        DATETIME(6),
    `gmt_modified`      DATETIME(6),
    PRIMARY KEY (`branch_id`),
    KEY `idx_xid` (`xid`)
) ENGINE = InnoDB DEFAULT CHARSET = utf8;

-- the table to store lock data
CREATE TABLE IF NOT EXISTS `lock_table` (
    `row_key`        VARCHAR(128) NOT NULL,
    `xid`            VARCHAR(96),
    `transaction_id` BIGINT,
    `branch_id`      BIGINT NOT NULL,
    `resource_id`    VARCHAR(256),
    `table_name`     VARCHAR(32),
    `pk`             VARCHAR(36),
    `gmt_create`     DATETIME,
    `gmt_modified`   DATETIME,
    PRIMARY KEY (`row_key`),
    KEY `idx_branch_id` (`branch_id`)
) ENGINE = InnoDB DEFAULT CHARSET = utf8;
```

Start the Docker container:

```shell
docker run -d --name seata-server \
-p 8091:8091 \
-v /opt/seata-config/file.conf:/seata-server/resources/file.conf \
-v /opt/seata-config/registry.conf:/seata-server/resources/registry.conf \
seataio/seata-server
```

Repeat the same steps on the remaining machines.
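Once a node is up, you can check that the server started and registered itself with Nacos. A hedged sketch, assuming the container name used above and that the server registers under the default `seata-server` service name (the instance-list endpoint is from the Nacos 1.x open API):

```shell
# Tail the container log and watch for the line announcing the server has started
docker logs -f seata-server

# Ask Nacos which seata-server instances are registered (address assumed)
curl -s 'http://your-nacos-address:8848/nacos/v1/ns/instance/list?serviceName=seata-server'
```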