zwillon

Beijing · University of Science and Technology of China | Computer Science

This user is too lazy to leave anything behind.

Activity

zwillon upvoted an article · 2019-03-12

A minimal tutorial: web development with Kotlin + Spring Boot + JPA

Before starting, you should have a basic grasp of Java, Spring Boot, and Kotlin.

For Kotlin, the tutorial on the Kotlin Chinese site is a good reference; a Java programmer should be able to pick it up in half a day.

Why Kotlin

Kotlin is more concise than Java and, like Java, is a JVM language. There is plenty of discussion online about Kotlin's pros and cons, so I won't go into them here.

Of course, the main reason is that the company where I'm interning this summer uses Kotlin and Spring Boot for its web development. o(╯□╰)o

I knew very little about Kotlin before, only that it became a first-class language for Android at last year's Google I/O, so I'm using the time before summer to learn it.

The tutorial

Create a Spring Boot project

First, use IDEA to create a Spring Boot project.

Choose Kotlin as the language and Gradle as the build tool; when selecting modules, picking Web, JPA, and MySQL is enough.
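
For reference, the generated dependencies look roughly like this if you use the Gradle Kotlin DSL (a sketch only: the exact entries and versions depend on the Initializr and Spring Boot versions, and the snippets later in this article use the Groovy DSL):

// build.gradle.kts -- assumed sketch of what the Web + JPA + MySQL selection produces
dependencies {
    implementation("org.springframework.boot:spring-boot-starter-web")
    implementation("org.springframework.boot:spring-boot-starter-data-jpa")
    implementation("org.jetbrains.kotlin:kotlin-reflect")
    implementation("org.jetbrains.kotlin:kotlin-stdlib-jdk8")
    runtimeOnly("mysql:mysql-connector-java")
    testImplementation("org.springframework.boot:spring-boot-starter-test")
}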

Then edit the configuration file: because JPA is on the classpath, the DataSource must be configured, otherwise the application will not start.

spring:
  datasource:
    driver-class-name: com.mysql.jdbc.Driver
    username: root
    password: ABCabc123#
    url: jdbc:mysql://localhost:3306/db_test?useSSL=false

Once the configuration is done, you will find an Application.kt file already generated under the source directory. It boots Spring Boot and corresponds to the Application.java file in a Java project.

@SpringBootApplication
class TestApplication

fun main(args: Array<String>) {
    runApplication<TestApplication>(*args)
}

Create a controller

@RestController
@RequestMapping("/hello")
class HelloController {
    @GetMapping
    fun hello():String {
        return "hello world"
    }
}

The code looks almost identical to the Java version; you can convert between the two seamlessly.

Start it up!

Then send a request with curl:

➜  ~ curl "http://localhost:8080/hello"
hello world                                                          

And with that, a simple request works end to end.

Generate API documentation with Swagger2

Swagger2 can generate API documentation automatically and lets you exercise the endpoints, which saves the backend a lot of effort that would otherwise go into maintaining documentation by hand.

First, add the Swagger2 dependencies:

    compile group: 'io.springfox', name: 'springfox-swagger2', version: '2.8.0'
    compile group: 'io.springfox', name: 'springfox-swagger-ui', version: '2.8.0'

Configure Swagger2

@Configuration
@EnableSwagger2
class Swagger2 {

    @Bean
    fun createRestApi(): Docket {
        return Docket(DocumentationType.SWAGGER_2)  // use the Swagger 2 spec
                .apiInfo(apiInfo())                 // info shown on the documentation page
                .select()                           // returns an ApiSelectorBuilder instance
                .apis(RequestHandlerSelectors.basePackage("io.ride.vote.web"))      // package containing the API controllers
                .paths(PathSelectors.any())         
                .build()
    }

    /**
     * Info displayed on the documentation page
     */
    private fun apiInfo(): ApiInfo {
        return ApiInfoBuilder()
                .title("Vote RestFul APIs文档")
                .description("项目API接口文档")
                .contact(Contact("ride", "", "supreDong@gamil.com"))
                .version("0.0.1")
                .build()
    }
}

The @Configuration annotation marks this as a configuration class, and @EnableSwagger2 enables Swagger2.

API documentation is then generated by annotating the controller:

@Api(value = "test", description = "Test controller")
@RestController
@RequestMapping("/hello")
class HelloController {

    @GetMapping
    @ApiOperation("Hello, world!", notes = "returns hello world")
    fun hello(): String {
        return "hello world"
    }
}

Then open http://localhost:8080/swagger-ui.html to see the generated API information; you can also try the endpoints out directly on that page.

Unified exception handling

The approach is the same as in Java; the Java code is simply translated into Kotlin.

@ControllerAdvice
class CustomExceptionHandler {

    @ExceptionHandler(ApiException::class)
    fun handlerApiException(e: ApiException): ResponseEntity<Result> {
        val result = Result(e.code, e.data)
        return result.ok()

    }

    @ExceptionHandler(MissingServletRequestParameterException::class)
    fun handMissingServletRequestParameterException(e: MissingServletRequestParameterException): ResponseEntity<Result> {

        val result = Result(HttpStatus.BAD_REQUEST.value(), e.message)
        return result.ok()
    }

}

class ApiException(val code: ResultCode, val data: HashMap<String, Any>? = null) : RuntimeException(code.msg)
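
The Result and ResultCode types used above are never shown in the article. A minimal sketch of what they might look like (an assumption, not the author's actual code) that satisfies both handlers:

import org.springframework.http.HttpStatus
import org.springframework.http.ResponseEntity

// Assumed sketch: an error-code enum carrying the message that ApiException passes to RuntimeException
enum class ResultCode(val code: Int, val msg: String) {
    BAD_REQUEST(HttpStatus.BAD_REQUEST.value(), "bad request"),
    INTERNAL_ERROR(HttpStatus.INTERNAL_SERVER_ERROR.value(), "internal error")
}

// Assumed sketch: a simple response wrapper with the ok() helper used by the handlers above
class Result(val code: Int, val data: Any? = null) {
    constructor(code: ResultCode, data: Any? = null) : this(code.code, data)

    // Wrap this result in an HTTP 200 response
    fun ok(): ResponseEntity<Result> = ResponseEntity.ok(this)
}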

Using JPA

First, configure JPA:

spring:
  jpa:
    show-sql: true
    hibernate:
      ddl-auto: update
    database: mysql

Create the data class

@Entity
@Table(name = "t_user")
data class User(
        @Id
        @GeneratedValue(strategy = GenerationType.IDENTITY)
        var id: Long = -1,
        @Column(nullable = false)
        var username: String? = null,
        @Column(nullable = false)
        var password: String? = null,
        @Column(nullable = false)
        var email: String? = null,
        @Column(nullable = true)
        var nickname: String? = null,
        @Column(nullable = false)
        var createTime: Date = Date()
)

Create the repository layer

interface IUserService {
    /**
     * Add a user
     */
    fun addUser(user: User): User

    /**
     * List all users
     */
    fun listAll(): List<User>

    /**
     * Delete a user by id
     */
    fun deleteUser(id: Long)
}
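
The interface above is a service-layer abstraction; the UserRepository that the unit test below injects is not shown in the article. With Spring Data JPA it would typically be nothing more than the following (an assumed sketch; the inherited CRUD methods already cover the operations listed above):

import org.springframework.data.jpa.repository.JpaRepository

// Assumed sketch: Spring Data JPA generates the implementation at runtime,
// providing save(), findAll(), findById(), deleteById(), and so on
interface UserRepository : JpaRepository<User, Long>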

Unit tests

@RunWith(SpringRunner::class)
@SpringBootTest
class UserRepositoryTest {

    @Autowired
    private lateinit var userRepository: UserRepository

    @Test
    fun `find all user test`() {
        println(userRepository.findAll())
    }

    @Test
    fun `add user test`() {
        val user = User(username = "ride", email = "supreDong@gmail.com", password = "123123", nickname = "ride")
        println(userRepository.save(user))
    }

    @Test
    fun `delete user test`() {
        val user = userRepository.findById(1L)
        println(user.orElse(null))
        if (user.isPresent)
            userRepository.deleteById(user.get().id)
    }
}

Backtick-quoted method names containing spaces (Kotlin 1.2) are used here; by convention they are used only in unit test code.

Summary

Using Kotlin together with Spring Boot is a brand-new experience; I recommend giving it a try.

Project repository


Liked 6 · Bookmarked 11 · 0 comments

zwillon answered a question · 2018-07-16

In PyQt5, the mouse position from QGraphicsScene's mousePressEvent is always 0.0

Use scenePos instead:

def mousePressEvent(self, event):
    QGraphicsScene.mousePressEvent(self, event)
    e = event.scenePos()
    print(e)

3 followers · 2 answers

zwillon answered a question · 2018-07-10

A Baidu interview question: how do you quickly find the duplicates in a file that is too large to read in one go?

First convert each IP to an int (there are plenty of algorithms for this online),
then build a large array.
For example, 192.168.3.4 converts to 3232236292,
so the corresponding slot int array[3232236292 - 1] is used as a counter.
To simplify further: since such an array would have to be pre-allocated, a map works just as well.
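
As a rough illustration of the idea (a sketch only; it assumes one IPv4 address per line in a hypothetical ips.txt and keeps the counts in a map instead of a huge pre-allocated array):

import java.io.File

// Convert a dotted IPv4 string to its 32-bit numeric value, e.g. "192.168.3.4" -> 3232236292
fun ipToLong(ip: String): Long =
    ip.split(".").fold(0L) { acc, part -> acc * 256 + part.toLong() }

fun main() {
    val counts = HashMap<Long, Int>()
    // useLines streams the file lazily, so the whole file never has to fit in memory
    File("ips.txt").useLines { lines ->
        lines.forEach { line ->
            val key = ipToLong(line.trim())
            counts[key] = (counts[key] ?: 0) + 1
        }
    }
    // Print every address (in numeric form) that appears more than once
    counts.filterValues { it > 1 }.forEach { (ip, n) -> println("$ip -> $n") }
}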

9 followers · 5 answers

zwillon answered a question · 2018-07-01

Does PyQt5 already come with the Qt libraries?

PyQt4 and PyQt5 wrap Qt's shared libraries via sip so they can be called from Python, so you only need to write Python to use them.
Both PyCharm and Eric can be used for development.

3 followers · 2 answers

zwillon bookmarked an article · 2018-07-01

A silence-detection VAD algorithm

I recently extracted the VAD algorithm from the Opus encoder. I couldn't find a suitable open-source VAD module online, so I'm posting the code here in the hope that it helps someone.
Below are the .h file and the .cpp file. To use it, call silk_VAD_Get(), feeding it one frame at a time (I assume a 20 ms frame length and a 16 kHz sample rate by default; you can change this inside silk_VAD_Get). It returns 0 or 1, indicating whether the frame is silence (0) or contains speech (1).
Header (.h) file:

#include <stdlib.h>
#include <malloc.h>
#include <intrin.h>
#include <string.h>

int silk_VAD_Get(
    //int          state,                       /*  Encoder state                               */
    const short            pIn[]                           /* I    PCM input                                   */
);

#define TYPE_NO_VOICE_ACTIVITY                  0
#define TYPE_UNVOICED                           1
#define TYPE_VOICED                             2

#define SPEECH_ACTIVITY_DTX_THRES                       0.05f
#define SILK_FIX_CONST( C, Q )              ((int)((C) * ((long)1 << (Q)) + 0.5))
#define silk_int16_MAX   0x7FFF                               /*  2^15 - 1 =  32767 */
#define silk_int16_MIN   ((short)0x8000)                 /* -2^15     = -32768 */
#define silk_int32_MAX   0x7FFFFFFF                           /*  2^31 - 1 =  2147483647 */
#define silk_int32_MIN   ((int)0x80000000)             /* -2^31     = -2147483648 */
#define silk_memset(dest, src, size)        memset((dest), (src), (size))

#define VAD_NOISE_LEVEL_SMOOTH_COEF_Q16         1024    /* Must be <  4096 */
#define VAD_NOISE_LEVELS_BIAS                   50

/* Sigmoid settings */
#define VAD_NEGATIVE_OFFSET_Q5                  128     /* sigmoid is 0 at -128 */
#define VAD_SNR_FACTOR_Q16                      45000

/* smoothing for SNR measurement */
#define VAD_SNR_SMOOTH_COEF_Q18                 4096

#define VAD_N_BANDS 4
#define VAD_INTERNAL_SUBFRAMES_LOG2             2
#define VAD_INTERNAL_SUBFRAMES                  ( 1 << VAD_INTERNAL_SUBFRAMES_LOG2 )
#define silk_uint8_MAX   0xFF                                 /*  2^8 - 1 = 255 */

#define VARDECL(type, var) type *var
#define silk_RSHIFT32(a, shift)             ((a)>>(shift))
#define silk_RSHIFT(a, shift)             ((a)>>(shift))
#define silk_LSHIFT32(a, shift)             ((a)<<(shift))
#define silk_LSHIFT(a, shift)             ((a)<<(shift))
#define ALLOC(var, size, type) var = ((type*)alloca(sizeof(type)*(size)))
#define silk_ADD16(a, b)                    ((a) + (b))
#define silk_ADD32(a, b)                    ((a) + (b))
#define silk_ADD64(a, b)                    ((a) + (b))

#define silk_SUB16(a, b)                    ((a) - (b))
#define silk_SUB32(a, b)                    ((a) - (b))
#define silk_SUB64(a, b)                    ((a) - (b))
#define silk_SMULWB(a32, b32)            ((((a32) >> 16) * (int)((short)(b32))) + ((((a32) & 0x0000FFFF) * (int)((short)(b32))) >> 16))
#define silk_SMLAWB(a32, b32, c32)       ((a32) + ((((b32) >> 16) * (int)((short)(c32))) + ((((b32) & 0x0000FFFF) * (int)((short)(c32))) >> 16)))
#define silk_SAT16(a)                       ((a) > silk_int16_MAX ? silk_int16_MAX :      \
                                            ((a) < silk_int16_MIN ? silk_int16_MIN : (a)))
#define silk_MLA(a32, b32, c32)             silk_ADD32((a32),((b32) * (c32)))
#define silk_SMLABB(a32, b32, c32)       ((a32) + ((int)((short)(b32))) * (int)((short)(c32)))
#define silk_ADD_POS_SAT32(a, b)            ((((unsigned int)(a)+(unsigned int)(b)) & 0x80000000) ? silk_int32_MAX : ((a)+(b)))
#define silk_DIV32_16(a32, b16)             ((int)((a32) / (b16)))
#define silk_DIV32(a32, b32)                ((int)((a32) / (b32)))
#define silk_RSHIFT_ROUND(a, shift)         ((shift) == 1 ? ((a) >> 1) + ((a) & 1) : (((a) >> ((shift) - 1)) + 1) >> 1)

#define silk_SMULWW(a32, b32)            silk_MLA(silk_SMULWB((a32), (b32)), (a32), silk_RSHIFT_ROUND((b32), 16))
#define silk_min(a, b)                      (((a) < (b)) ? (a) : (b))
#define silk_max(a, b)                      (((a) > (b)) ? (a) : (b))
#define silk_ADD_LSHIFT32(a, b, shift)      silk_ADD32((a), silk_LSHIFT32((b), (shift)))    /* shift >= 0 */
#define silk_MUL(a32, b32)                  ((a32) * (b32))
#define silk_SMULBB(a32, b32)            ((int)((short)(a32)) * (int)((short)(b32)))
#define silk_LIMIT( a, limit1, limit2)      ((limit1) > (limit2) ? ((a) > (limit1) ? (limit1) : ((a) < (limit2) ? (limit2) : (a))) \
                                                                 : ((a) > (limit2) ? (limit2) : ((a) < (limit1) ? (limit1) : (a))))

#define silk_LSHIFT_SAT32(a, shift)         (silk_LSHIFT32( silk_LIMIT( (a), silk_RSHIFT32( silk_int32_MIN, (shift) ), \
                                                    silk_RSHIFT32( silk_int32_MAX, (shift) ) ), (shift) ))







static const int tiltWeights[VAD_N_BANDS] = { 30000, 6000, -12000, -12000 };
static const int sigm_LUT_neg_Q15[6] = {
    16384, 8812, 3906, 1554, 589, 219
};
static const int sigm_LUT_slope_Q10[6] = {
    237, 153, 73, 30, 12, 7
};
static const int sigm_LUT_pos_Q15[6] = {
    16384, 23955, 28861, 31213, 32178, 32548
};

static __inline int ec_bsr(unsigned long _x) {
    unsigned long ret;
    _BitScanReverse(&ret, _x);
    return (int)ret;
}
# define EC_CLZ0    (1)
# define EC_CLZ(_x) (-ec_bsr(_x))
# define EC_ILOG(_x) (EC_CLZ0-EC_CLZ(_x))
static int silk_min_int(int a, int b)
{
    return (((a) < (b)) ? (a) : (b));
}
static int silk_max_int(int a, int b)
{
    return (((a) > (b)) ? (a) : (b));
}
static int silk_max_32(int a, int b)
{
    return (((a) > (b)) ? (a) : (b));
}
static  int silk_CLZ32(int in32)
{
    return in32 ? 32 - EC_ILOG(in32) : 32;
}
static  int silk_ROR32(int a32, int rot)
{
    unsigned int x = (unsigned int)a32;
    unsigned int r = (unsigned int)rot;
    unsigned int m = (unsigned int)-rot;
    if (rot == 0) {
        return a32;
    }
    else if (rot < 0) {
        return (int)((x << m) | (x >> (32 - m)));
    }
    else {
        return (int)((x << (32 - r)) | (x >> r));
    }
}
static  void silk_CLZ_FRAC(
    int in,            /* I  input                               */
    int *lz,           /* O  number of leading zeros             */
    int *frac_Q7       /* O  the 7 bits right after the leading one */
)
{
    int lzeros = silk_CLZ32(in);

    *lz = lzeros;
    *frac_Q7 = silk_ROR32(in, 24 - lzeros) & 0x7f;
}


/* Approximation of square root                                          */
/* Accuracy: < +/- 10%  for output values > 15                           */
/*           < +/- 2.5% for output values > 120                          */
static  int silk_SQRT_APPROX(int x)
{
    int y, lz, frac_Q7;

    if (x <= 0) {
        return 0;
    }

    silk_CLZ_FRAC(x, &lz, &frac_Q7);

    if (lz & 1) {
        y = 32768;
    }
    else {
        y = 46214;        /* 46214 = sqrt(2) * 32768 */
    }

    /* get scaling right */
    y >>= silk_RSHIFT(lz, 1);

    /* increment using fractional part of input */
    y = silk_SMLAWB(y, y, silk_SMULBB(213, frac_Q7));

    return y;
}

Source (.cpp) file:


#include "opusvad.h"
#include <stdlib.h>

static short A_fb1_20 = 5394 << 1;
static short A_fb1_21 = -24290; /* (int16)(20623 << 1) */

typedef struct {
    int                  AnaState[2];                  /* Analysis filterbank state: 0-8 kHz                                   */
    int                  AnaState1[2];                 /* Analysis filterbank state: 0-4 kHz                                   */
    int                  AnaState2[2];                 /* Analysis filterbank state: 0-2 kHz                                   */
    int                  XnrgSubfr[4];       /* Subframe energies                                                    */
    int                  NrgRatioSmth_Q8[VAD_N_BANDS]; /* Smoothed energy level in each band                                   */
    short                 HPstate;                        /* State of differentiator in the lowest band                           */
    int                  NL[VAD_N_BANDS];              /* Noise energy level in each band                                      */
    int                  inv_NL[VAD_N_BANDS];          /* Inverse noise energy level in each band                              */
    int                  NoiseLevelBias[VAD_N_BANDS];  /* Noise level estimator bias/offset                                    */
    int                  counter;                        /* Frame counter used in the initial phase                              */
} VAD_state;

/* Split signal into two decimated bands using first-order allpass filters */
void silk_ana_filt_bank_1(
    const short            *in,                /* I    Input signal [N]                                            */
    int                  *S,                 /* I/O  State vector [2]                                            */
    short                  *outL,              /* O    Low band [N/2]                                              */
    short                  *outH,              /* O    High band [N/2]                                             */
    const int            N                   /* I    Number of input samples                                     */
)
{
    int      k, N2 = silk_RSHIFT(N, 1);
    int    in32, X, Y, out_1, out_2;

    /* Internal variables and state are in Q10 format */
    for (k = 0; k < N2; k++) {
        /* Convert to Q10 */
        in32 = silk_LSHIFT((int)in[2 * k], 10);

        /* All-pass section for even input sample */
        Y = silk_SUB32(in32, S[0]);
        X = silk_SMLAWB(Y, Y, A_fb1_21);
        out_1 = silk_ADD32(S[0], X);
        S[0] = silk_ADD32(in32, X);

        /* Convert to Q10 */
        in32 = silk_LSHIFT((int)in[2 * k + 1], 10);

        /* All-pass section for odd input sample, and add to output of previous section */
        Y = silk_SUB32(in32, S[1]);
        X = silk_SMULWB(Y, A_fb1_20);
        out_2 = silk_ADD32(S[1], X);
        S[1] = silk_ADD32(in32, X);

        /* Add/subtract, convert back to int16 and store to output */
        outL[k] = (short)silk_SAT16(silk_RSHIFT_ROUND(silk_ADD32(out_2, out_1), 11));
        outH[k] = (short)silk_SAT16(silk_RSHIFT_ROUND(silk_SUB32(out_2, out_1), 11));
    }
}

void silk_VAD_GetNoiseLevels(
    const int            pX[VAD_N_BANDS],  /* I    subband energies                            */
    VAD_state              *psSilk_VAD         /* I/O  Pointer to Silk VAD state                   */
)
{
    int   k;
    int nl, nrg, inv_nrg;
    int   coef, min_coef;

    /* Initially faster smoothing */
    if (psSilk_VAD->counter < 1000) { /* 1000 = 20 sec */
        min_coef = silk_DIV32_16(silk_int16_MAX, silk_RSHIFT(psSilk_VAD->counter, 4) + 1);
    }
    else {
        min_coef = 0;
    }

    for (k = 0; k < VAD_N_BANDS; k++) {
        /* Get old noise level estimate for current band */
        nl = psSilk_VAD->NL[k];
        //silk_assert(nl >= 0);

        /* Add bias */
        nrg = silk_ADD_POS_SAT32(pX[k], psSilk_VAD->NoiseLevelBias[k]);
        //silk_assert(nrg > 0);

        /* Invert energies */
        inv_nrg = silk_DIV32(silk_int32_MAX, nrg);
        //silk_assert(inv_nrg >= 0);

        /* Less update when subband energy is high */
        if (nrg > silk_LSHIFT(nl, 3)) {
            coef = VAD_NOISE_LEVEL_SMOOTH_COEF_Q16 >> 3;
        }
        else if (nrg < nl) {
            coef = VAD_NOISE_LEVEL_SMOOTH_COEF_Q16;
        }
        else {
            coef = silk_SMULWB(silk_SMULWW(inv_nrg, nl), VAD_NOISE_LEVEL_SMOOTH_COEF_Q16 << 1);
        }

        /* Initially faster smoothing */
        coef = silk_max_int(coef, min_coef);

        /* Smooth inverse energies */
        psSilk_VAD->inv_NL[k] = silk_SMLAWB(psSilk_VAD->inv_NL[k], inv_nrg - psSilk_VAD->inv_NL[k], coef);
        //silk_assert(psSilk_VAD->inv_NL[k] >= 0);

        /* Compute noise level by inverting again */
        nl = silk_DIV32(silk_int32_MAX, psSilk_VAD->inv_NL[k]);
        //silk_assert(nl >= 0);

        /* Limit noise levels (guarantee 7 bits of head room) */
        nl = silk_min(nl, 0x00FFFFFF);

        /* Store as part of state */
        psSilk_VAD->NL[k] = nl;
    }

    /* Increment frame counter */
    psSilk_VAD->counter++;
}

int silk_lin2log(
    const int            inLin               /* I  input in linear scale                                         */
)
{
    int lz, frac_Q7;

    silk_CLZ_FRAC(inLin, &lz, &frac_Q7);

    /* Piece-wise parabolic approximation */
    return silk_ADD_LSHIFT32(silk_SMLAWB(frac_Q7, silk_MUL(frac_Q7, 128 - frac_Q7), 179), 31 - lz, 7);
}

int silk_sigm_Q15(
    int                    in_Q5               /* I                                                                */
)
{
    int ind;

    if (in_Q5 < 0) {
        /* Negative input */
        in_Q5 = -in_Q5;
        if (in_Q5 >= 6 * 32) {
            return 0;        /* Clip */
        }
        else {
            /* Linear interpolation of look up table */
            ind = silk_RSHIFT(in_Q5, 5);
            return(sigm_LUT_neg_Q15[ind] - silk_SMULBB(sigm_LUT_slope_Q10[ind], in_Q5 & 0x1F));
        }
    }
    else {
        /* Positive input */
        if (in_Q5 >= 6 * 32) {
            return 32767;        /* clip */
        }
        else {
            /* Linear interpolation of look up table */
            ind = silk_RSHIFT(in_Q5, 5);
            return(sigm_LUT_pos_Q15[ind] + silk_SMULBB(sigm_LUT_slope_Q10[ind], in_Q5 & 0x1F));
        }
    }
}
int silk_VAD_Init(                                         /* O    Return value, 0 if success                  */
    VAD_state              *psSilk_VAD                     /* I/O  Pointer to Silk VAD state                   */
)
{
    int b, ret = 0;

    /* reset state memory */
    silk_memset(psSilk_VAD, 0, sizeof(VAD_state));

    /* init noise levels */
    /* Initialize array with approx pink noise levels (psd proportional to inverse of frequency) */
    for (b = 0; b < VAD_N_BANDS; b++) {
        psSilk_VAD->NoiseLevelBias[b] = silk_max_32(silk_DIV32_16(VAD_NOISE_LEVELS_BIAS, b + 1), 1);
    }

    /* Initialize state */
    for (b = 0; b < VAD_N_BANDS; b++) {
        psSilk_VAD->NL[b] = silk_MUL(100, psSilk_VAD->NoiseLevelBias[b]);
        psSilk_VAD->inv_NL[b] = silk_DIV32(silk_int32_MAX, psSilk_VAD->NL[b]);
    }
    psSilk_VAD->counter = 15;

    /* init smoothed energy-to-noise ratio*/
    for (b = 0; b < VAD_N_BANDS; b++) {
        psSilk_VAD->NrgRatioSmth_Q8[b] = 100 * 256;       /* 100 * 256 --> 20 dB SNR */
    }

    return(ret);
}

static int noSpeechCounter;

int silk_VAD_Get(
    //int          state,                       /*  Encoder state                               */
    const short            pIn[]                           /* I    PCM input                                   */
)
{
    int   SA_Q15, pSNR_dB_Q7, input_tilt;
    int   decimated_framelength1, decimated_framelength2;
    int   decimated_framelength;
    int   dec_subframe_length, dec_subframe_offset, SNR_Q7, i, b, s;
    int sumSquared, smooth_coef_Q16;
    short HPstateTmp;
    VARDECL(short, X);
    int Xnrg[4];
    int NrgToNoiseRatio_Q8[4];
    int speech_nrg, x_tmp;
    int   X_offset[4];
    int   ret = 0;
    int frame_length = 20;//
    int fs_kHz = 16;
    int  input_quality_bands_Q15[VAD_N_BANDS];
    int signalType;
    int VAD_flag;
    /* Safety checks
    silk_assert(4 == 4);
    silk_assert(MAX_FRAME_LENGTH >= frame_length);
    silk_assert(frame_length <= 512);
    silk_assert(frame_length == 8 * silk_RSHIFT(frame_length, 3));
    */
    /***********************/
    /* Filter and Decimate */
    /***********************/
    decimated_framelength1 = silk_RSHIFT(frame_length, 1);
    decimated_framelength2 = silk_RSHIFT(frame_length, 2);
    decimated_framelength = silk_RSHIFT(frame_length, 3);
    /* Decimate into 4 bands:
    0       L      3L       L              3L                             5L
    -      --       -              --                             --
    8       8       2               4                              4

    [0-1 kHz| temp. |1-2 kHz|    2-4 kHz    |            4-8 kHz           |

    They're arranged to allow the minimal ( frame_length / 4 ) extra
    scratch space during the downsampling process */
    X_offset[0] = 0;
    X_offset[1] = decimated_framelength + decimated_framelength2;
    X_offset[2] = X_offset[1] + decimated_framelength;
    X_offset[3] = X_offset[2] + decimated_framelength2;
    ALLOC(X, X_offset[3] + decimated_framelength1, short);
    VAD_state *psSilk_VAD;
    psSilk_VAD = (VAD_state*)malloc(sizeof(VAD_state));
    int ret1 = silk_VAD_Init(psSilk_VAD);



    /* 0-8 kHz to 0-4 kHz and 4-8 kHz */
    silk_ana_filt_bank_1(pIn, &psSilk_VAD->AnaState[0],
        X, &X[X_offset[3]], frame_length);

    /* 0-4 kHz to 0-2 kHz and 2-4 kHz */
    silk_ana_filt_bank_1(X, &psSilk_VAD->AnaState1[0],
        X, &X[X_offset[2]], decimated_framelength1);

    /* 0-2 kHz to 0-1 kHz and 1-2 kHz */
    silk_ana_filt_bank_1(X, &psSilk_VAD->AnaState2[0],
        X, &X[X_offset[1]], decimated_framelength2);

    /*********************************************/
    /* HP filter on lowest band (differentiator) */
    /*********************************************/
    X[decimated_framelength - 1] = silk_RSHIFT(X[decimated_framelength - 1], 1);
    HPstateTmp = X[decimated_framelength - 1];
    for (i = decimated_framelength - 1; i > 0; i--) {
        X[i - 1] = silk_RSHIFT(X[i - 1], 1);
        X[i] -= X[i - 1];
    }
    X[0] -= psSilk_VAD->HPstate;
    psSilk_VAD->HPstate = HPstateTmp;

    /*************************************/
    /* Calculate the energy in each band */
    /*************************************/
    for (b = 0; b < 4; b++) {
        /* Find the decimated framelength in the non-uniformly divided bands */
        decimated_framelength = silk_RSHIFT(frame_length, silk_min_int(4 - b, 4 - 1));

        /* Split length into subframe lengths */
        dec_subframe_length = silk_RSHIFT(decimated_framelength, VAD_INTERNAL_SUBFRAMES_LOG2);
        dec_subframe_offset = 0;

        /* Compute energy per sub-frame */
        /* initialize with summed energy of last subframe */
        Xnrg[b] = psSilk_VAD->XnrgSubfr[b];
        for (s = 0; s < VAD_INTERNAL_SUBFRAMES; s++) {
            sumSquared = 0;
            for (i = 0; i < dec_subframe_length; i++) {
                /* The energy will be less than dec_subframe_length * ( silk_short_MIN / 8 ) ^ 2.            */
                /* Therefore we can accumulate with no risk of overflow (unless dec_subframe_length > 128)  */
                x_tmp = silk_RSHIFT(
                    X[X_offset[b] + i + dec_subframe_offset], 3);
                sumSquared = silk_SMLABB(sumSquared, x_tmp, x_tmp);

                /* Safety check */
                //silk_assert(sumSquared >= 0);
            }

            /* Add/saturate summed energy of current subframe */
            if (s < VAD_INTERNAL_SUBFRAMES - 1) {
                Xnrg[b] = silk_ADD_POS_SAT32(Xnrg[b], sumSquared);
            }
            else {
                /* Look-ahead subframe */
                Xnrg[b] = silk_ADD_POS_SAT32(Xnrg[b], silk_RSHIFT(sumSquared, 1));
            }

            dec_subframe_offset += dec_subframe_length;
        }
        psSilk_VAD->XnrgSubfr[b] = sumSquared;
    }

    /********************/
    /* Noise estimation */
    /********************/
    silk_VAD_GetNoiseLevels(&Xnrg[0], psSilk_VAD);

    /***********************************************/
    /* Signal-plus-noise to noise ratio estimation */
    /***********************************************/
    sumSquared = 0;
    input_tilt = 0;
    for (b = 0; b < 4; b++) {
        speech_nrg = Xnrg[b] - psSilk_VAD->NL[b];
        if (speech_nrg > 0) {
            /* Divide, with sufficient resolution */
            if ((Xnrg[b] & 0xFF800000) == 0) {
                NrgToNoiseRatio_Q8[b] = silk_DIV32(silk_LSHIFT(Xnrg[b], 8), psSilk_VAD->NL[b] + 1);
            }
            else {
                NrgToNoiseRatio_Q8[b] = silk_DIV32(Xnrg[b], silk_RSHIFT(psSilk_VAD->NL[b], 8) + 1);
            }

            /* Convert to log domain */
            SNR_Q7 = silk_lin2log(NrgToNoiseRatio_Q8[b]) - 8 * 128;

            /* Sum-of-squares */
            sumSquared = silk_SMLABB(sumSquared, SNR_Q7, SNR_Q7);          /* Q14 */

                                                                           /* Tilt measure */
            if (speech_nrg < ((int)1 << 20)) {
                /* Scale down SNR value for small subband speech energies */
                SNR_Q7 = silk_SMULWB(silk_LSHIFT(silk_SQRT_APPROX(speech_nrg), 6), SNR_Q7);
            }
            input_tilt = silk_SMLAWB(input_tilt, tiltWeights[b], SNR_Q7);
        }
        else {
            NrgToNoiseRatio_Q8[b] = 256;
        }
    }

    /* Mean-of-squares */
    sumSquared = silk_DIV32_16(sumSquared, 4); /* Q14 */

                                               /* Root-mean-square approximation, scale to dBs, and write to output pointer */
    pSNR_dB_Q7 = (short)(3 * silk_SQRT_APPROX(sumSquared)); /* Q7 */

                                                            /*********************************/
                                                            /* Speech Probability Estimation */
                                                            /*********************************/
    SA_Q15 = silk_sigm_Q15(silk_SMULWB(VAD_SNR_FACTOR_Q16, pSNR_dB_Q7) - VAD_NEGATIVE_OFFSET_Q5);

    /**************************/
    /* Frequency Tilt Measure */
    /**************************/
    int input_tilt_Q15 = silk_LSHIFT(silk_sigm_Q15(input_tilt) - 16384, 1);

    /**************************************************/
    /* Scale the sigmoid output based on power levels */
    /**************************************************/
    speech_nrg = 0;
    for (b = 0; b < 4; b++) {
        /* Accumulate signal-without-noise energies, higher frequency bands have more weight */
        speech_nrg += (b + 1) * silk_RSHIFT(Xnrg[b] - psSilk_VAD->NL[b], 4);
    }

    /* Power scaling */
    if (speech_nrg <= 0) {
        SA_Q15 = silk_RSHIFT(SA_Q15, 1);
    }
    else if (speech_nrg < 32768) {
        if (frame_length == 10 * fs_kHz) {
            speech_nrg = silk_LSHIFT_SAT32(speech_nrg, 16);
        }
        else {
            speech_nrg = silk_LSHIFT_SAT32(speech_nrg, 15);
        }

        /* square-root */
        speech_nrg = silk_SQRT_APPROX(speech_nrg);
        SA_Q15 = silk_SMULWB(32768 + speech_nrg, SA_Q15);
    }

    /* Copy the resulting speech activity in Q8 */
    int speech_activity_Q8 = silk_min_int(silk_RSHIFT(SA_Q15, 7), silk_uint8_MAX);

    /***********************************/
    /* Energy Level and SNR estimation */
    /***********************************/
    /* Smoothing coefficient */
    smooth_coef_Q16 = silk_SMULWB(VAD_SNR_SMOOTH_COEF_Q18, silk_SMULWB((int)SA_Q15, SA_Q15));

    if (frame_length == 10 * fs_kHz) {
        smooth_coef_Q16 >>= 1;
    }

    for (b = 0; b < 4; b++) {
        /* compute smoothed energy-to-noise ratio per band */
        psSilk_VAD->NrgRatioSmth_Q8[b] = silk_SMLAWB(psSilk_VAD->NrgRatioSmth_Q8[b],
            NrgToNoiseRatio_Q8[b] - psSilk_VAD->NrgRatioSmth_Q8[b], smooth_coef_Q16);

        /* signal to noise ratio in dB per band */
        SNR_Q7 = 3 * (silk_lin2log(psSilk_VAD->NrgRatioSmth_Q8[b]) - 8 * 128);
        /* quality = sigmoid( 0.25 * ( SNR_dB - 16 ) ); */
        input_quality_bands_Q15[b] = silk_sigm_Q15(silk_RSHIFT(SNR_Q7 - 16 * 128, 4));
    }
    //gap************************************************************//
    if (speech_activity_Q8 < SILK_FIX_CONST(SPEECH_ACTIVITY_DTX_THRES, 8)) {
        signalType = TYPE_NO_VOICE_ACTIVITY;
        //noSpeechCounter++;
        VAD_flag = 0;
    }
    else {
        signalType = TYPE_UNVOICED;
        VAD_flag = 1;
    }
    free(psSilk_VAD);
    return(VAD_flag);
}

zwillon answered a question · 2018-05-29

[Solved] PyCharm fails to start: "Error: Unable to detect graphics environment"

Run it on a machine with a desktop; on a server you would need to install a graphical desktop environment first.

3 followers · 2 answers

zwillon answered a question · 2018-04-07

Can Java be faster than C++? Why?

1) Take a look at https://benchmarksgame-team.p... — C++ essentially never loses there.
2) Besides -O2 there is also -O3, plus many other compiler flags; see the GCC manual.
3) The JVM trades space for time by default, so this kind of comparison isn't quite fair.

I took your code and, without optimizing a single line, compiled it with -O3 (release builds are effectively -O3 anyway). Over three runs it was faster than the Java version every time, and that includes program startup time, so I don't see how you concluded it would be slower than Java.

Of course, C++ demands a lot from the programmer: without a solid grasp of the memory model, how the compiler works, and so on, it is hard to write high-quality C++. Java is much friendlier in that respect.

I recently bought a Raspberry Pi 3B+ and ran this program on it. In terms of performance, C++ is slightly faster than Java on ARM, though the advantage is small. I also wrote a Rust version (with no algorithmic optimization); its performance is close to C++ and, in release mode, slightly faster than Java.

fn main() {
    let input_num=100001;
    let mut pp_count =0;
    for  each in 2..input_num {
        let mut factorization_lst=0;
        for  factor in 1..each+1 {
            if each%factor==0 &&!(factor>each/factor) {
                factorization_lst += 1;
            }
        }
        if factorization_lst==1
        {
            let mut antitone =0;
            let mut each_cpy =each;
            while each_cpy != 0
            {
                antitone=antitone*10+each_cpy%10;
                each_cpy/=10;
            }
            if antitone==each
            {
               pp_count += 1;
               println!("{}:{}", pp_count, each);
            }
        }
    }
}

Looking at CPU usage, all three programs max out a single core at runtime (the Pi 3B+ has four cores), but in terms of memory C++ and Rust have a clear advantage, using roughly 1/10 of what the Java version does.
From these results, the runtime performance gap between the three languages is not that large. A few reasons:
1) int is a primitive type in Java, C++, and Rust alike, and primitive arithmetic performs similarly everywhere; in Java, using Integer instead could cost performance.
2) Only a single core is used, so there is no cross-core data transfer.
3) The program does not exercise recursion, pass-by-value vs pass-by-reference, copying, and so on, where the differences would be larger.

Conclusion: Java offers a good trade-off. Its runtime performance may not be the best, but development productivity is high, and unlike some other languages you rarely have to worry about cross-platform porting or compiler flags.
PS: I'm optimistic about Rust.

14 followers · 8 answers

zwillon answered a question · 2018-01-22

[Solved] What is the difference between sys.path and os.environ['PATH']?

sys.path is the list of directories Python searches when loading packages; it determines, for example, which directory flask is imported from.
PATH, by contrast, is an environment variable defined by the operating system (both on Windows and on Linux); the directories in PATH are where executables are looked up. For instance, when you run java on the command line without an absolute path, it is found through the directories listed in PATH.
To the asker: let me expand the answer.
sys.path defines the directories from which Python packages are loaded.
For example, I have two files with the same name, hello.py, in two different directories, as in the screenshot below.

(screenshot: the test directory contains two packages, t1 and t2, each with its own hello.py)

You can think of test as a package developed by someone else.
Our development directory on this machine is /data; note that this is a completely different directory from the package being imported.

(screenshot: code that manipulates sys.path)

Look closely at the code in the screenshot: it sets the package loading order by manipulating sys.path.
In real development, which package Python loads can be controlled by configuring sys.path.
Although this works, note that dynamically modifying sys.path while the program is running is a very bad habit and leads to bugs that are hard to track down. If you have special loading-order requirements, set PYTHONPATH instead.
Besides the entries defined by PYTHONPATH, sys.path also contains the locations of the built-in modules.

If you are interested in how this is implemented, read the importlib chapter of the Python library documentation.

3 followers · 2 answers

zwillon answered a question · 2018-01-14

[Solved] Graphics card for deep learning

For deep learning, a graphics card in the ~1000 RMB range will struggle; the best value for money is the 1080 Ti.

3 followers · 2 answers

zwillon answered a question · 2018-01-14

[Solved] Getting notified when a long-running task finishes on Windows

That is the system tray; PyQt can do something like this.
See:
https://stackoverflow.com/que...

3 followers · 2 answers

Certifications & achievements

  • 169 upvotes received
  • 28 badges earned: 2 gold, 9 silver, 17 bronze

Skills

(゚∀゚ )
Nothing yet

Open-source projects & publications

(゚∀゚ )
Nothing yet

Joined 2015-12-02
Profile viewed by 1.6k people