AI Resource Digest: Issue 9 (2017-01-13)
Published: 2019-05-10

This article is about 4,364 characters long; estimated reading time is 14 minutes.


  1. [Paper Collection] The classical papers and codes about generative adversarial nets

Summary:

This is a collection of classic papers and code on generative adversarial networks (GANs).

Original link:


2. [Blog] Deep Learning Paper Implementations: Spatial Transformer Networks - Part I

Summary:

The first three blog posts in my “Deep Learning Paper Implementations” series will cover Spatial Transformer Networks, introduced by Max Jaderberg, Karen Simonyan, Andrew Zisserman and Koray Kavukcuoglu of Google DeepMind in 2015. The Spatial Transformer Network is a learnable module aimed at increasing the spatial invariance of Convolutional Neural Networks in a computationally and parameter-efficient manner.
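
To make the idea concrete, here is a minimal PyTorch sketch of a spatial transformer module (my own simplified toy, not the blog's implementation): a small localization network predicts the six parameters of a 2x3 affine matrix, and the input is then resampled accordingly with affine_grid and grid_sample.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialTransformer(nn.Module):
    """Minimal spatial transformer: a localization net predicts an affine
    transform that is used to resample the input feature map."""
    def __init__(self, in_channels):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 8, kernel_size=7), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(8 * 4 * 4, 32), nn.ReLU(),
        )
        self.affine_head = nn.Linear(32, 6)  # 6 parameters of a 2x3 affine matrix
        # start at the identity transform, as the paper recommends
        nn.init.zeros_(self.affine_head.weight)
        self.affine_head.bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

    def forward(self, x):
        theta = self.affine_head(self.features(x)).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)

# usage: warp a batch of 1-channel 28x28 inputs; output shape matches the input
stn = SpatialTransformer(in_channels=1)
print(stn(torch.randn(8, 1, 28, 28)).shape)
```

Because the module is differentiable end to end, it can simply be dropped in front of (or inside) an existing CNN and trained with the usual loss.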

Original link:


3. [Blog] GTA V + Universe

Summary:

The GTA V integration with Universe, built and maintained by Craig Quiter's DeepDrive project, is now open-source. To use it, you'll just need a purchased copy of GTA V, and then your Universe agent will be able to start driving a car around the streets of a high-fidelity virtual world.

GTA V in Universe gives AI agents access to a rich, 3D world. This video shows the frames fed to the agent (artificially slowed to 8 FPS, top left), diagnostics from the agent and environment (bottom left), and a human-friendly free camera view (right). The integration modifies the behavior of people within GTA V to be non-violent.
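
For orientation, a Universe agent is driven through the standard gym-style loop. The sketch below follows the generic gym/universe interface; the environment id 'gtav.SaneDriving-v0' and the keyboard action are assumptions for illustration only, so check the integration's own documentation for the actual values.

```python
# A minimal Universe agent loop (generic gym/universe API).
import gym
import universe  # importing universe registers its environments with gym

# NOTE: the environment id below is an assumption for illustration; use the id
# documented by the GTA V integration.
env = gym.make('gtav.SaneDriving-v0')
env.configure(remotes=1)          # connect to one remote GTA V instance
observation_n = env.reset()

while True:
    # placeholder policy: send a single VNC key event per observation;
    # a real agent would map pixels to steering/throttle actions instead
    action_n = [[('KeyEvent', 'w', True)] for _ in observation_n]
    observation_n, reward_n, done_n, info = env.step(action_n)
    env.render()
```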

Original link:


4. [Paper & Code] Learning Python Code Suggestion with a Sparse Pointer Network

Summary:

To enhance developer productivity, all modern integrated development environments (IDEs) include code suggestion functionality that proposes likely next tokens at the cursor. While current IDEs work well for statically-typed languages, their reliance on type annotations means that they do not provide the same level of support for dynamic programming languages as for statically-typed languages. Moreover, suggestion engines in modern IDEs do not propose expressions or multi-statement idiomatic code. Recent work has shown that language models can improve code suggestion systems by learning from software repositories. This paper introduces a neural language model with a sparse pointer network aimed at capturing very long-range dependencies. We release a large-scale code suggestion corpus of 41M lines of Python code crawled from GitHub. On this corpus, we found standard neural language models to perform well at suggesting local phenomena, but struggle to refer to identifiers that are introduced many tokens in the past. By augmenting a neural language model with a pointer network specialized in referring to predefined classes of identifiers, we obtain a much lower perplexity and a 5 percentage point increase in accuracy for code suggestion compared to an LSTM baseline. In fact, this increase in code suggestion accuracy is due to a 13 times more accurate prediction of identifiers. Furthermore, a qualitative analysis shows this model indeed captures interesting long-range dependencies, like referring to a class member defined over 60 tokens in the past.
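
As a rough illustration of the idea (not the authors' exact architecture), the sketch below augments an LSTM language model with a pointer that attends over previously seen identifier positions; a learned gate mixes the resulting copy distribution with the regular vocabulary softmax.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PointerLM(nn.Module):
    """LSTM language model mixed with a pointer over past identifier tokens
    (a simplified sketch, not the paper's exact model)."""
    def __init__(self, vocab_size, dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.vocab_head = nn.Linear(dim, vocab_size)
        self.gate = nn.Linear(dim, 1)   # balances vocabulary vs. pointer

    def forward(self, tokens, identifier_mask):
        # tokens: (B, T) token ids; identifier_mask: (B, T), 1.0 where the
        # token is an identifier the pointer may copy, else 0.0
        h, _ = self.lstm(self.embed(tokens))              # (B, T, D)
        p_vocab = F.softmax(self.vocab_head(h), dim=-1)   # (B, T, V)

        # pointer scores: each position attends over strictly earlier positions
        T = tokens.size(1)
        scores = torch.matmul(h, h.transpose(1, 2))       # (B, T, T)
        causal = torch.tril(torch.ones(T, T, device=tokens.device), diagonal=-1)
        allowed = causal * identifier_mask.unsqueeze(1)   # (B, T, T)
        scores = scores.masked_fill(allowed == 0, float('-inf'))
        p_ptr_pos = torch.nan_to_num(F.softmax(scores, dim=-1))  # (B, T, T)

        # move pointer probability mass from positions onto their token ids
        index = tokens.unsqueeze(1).expand(-1, T, -1)
        p_ptr = torch.zeros_like(p_vocab).scatter_add_(2, index, p_ptr_pos)

        g = torch.sigmoid(self.gate(h))                   # (B, T, 1)
        return g * p_vocab + (1 - g) * p_ptr              # next-token distribution

# usage with random data: 2 sequences of 10 tokens from a 500-token vocabulary
lm = PointerLM(vocab_size=500)
toks = torch.randint(0, 500, (2, 10))
ident = (toks > 250).float()        # pretend the high ids are identifiers
print(lm(toks, ident).shape)        # (2, 10, 500)
```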

Original link:

Code link:


5. [Paper & Code] StackGAN: Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks

Summary:

Synthesizing photo-realistic images from text descriptions is a challenging problem in computer vision and has many practical applications. Samples generated by existing text-to-image approaches can roughly reflect the meaning of the given descriptions, but they fail to contain necessary details and vivid object parts. In this paper, we propose Stacked Generative Adversarial Networks (StackGAN) to generate photo-realistic images conditioned on text descriptions. The Stage-I GAN sketches the primitive shape and basic colors of the object based on the given text description, yielding Stage-I low-resolution images. The Stage-II GAN takes Stage-I results and text descriptions as inputs, and generates high-resolution images with photo-realistic details. The Stage-II GAN is able to rectify defects and add compelling details with the refinement process. Samples generated by StackGAN are more plausible than those generated by existing approaches. Importantly, our StackGAN for the first time generates realistic 256 × 256 images conditioned on only text descriptions, while state-of-the-art methods can generate at most 128 × 128 images. To demonstrate the effectiveness of the proposed StackGAN, extensive experiments are conducted on CUB and Oxford-102 datasets, which contain enough object appearance variations and are widely used for text-to-image generation analysis.
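
The two-stage idea can be sketched roughly as follows (a simplified PyTorch toy, not the released StackGAN code): a Stage-I generator maps a text embedding plus noise to a 64x64 image, and a Stage-II generator refines that output into a 256x256 image, again conditioned on the text embedding. Discriminators and the conditioning-augmentation trick are omitted for brevity.

```python
import torch
import torch.nn as nn

class StageIGenerator(nn.Module):
    """Text embedding + noise -> coarse 64x64 image (shape and basic colors)."""
    def __init__(self, text_dim=128, z_dim=100):
        super().__init__()
        self.fc = nn.Linear(text_dim + z_dim, 128 * 8 * 8)
        self.up = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),  # 16x16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 32x32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),    # 64x64
        )

    def forward(self, text_emb, z):
        h = self.fc(torch.cat([text_emb, z], dim=1)).view(-1, 128, 8, 8)
        return self.up(h)

class StageIIGenerator(nn.Module):
    """Stage-I image + text embedding -> refined 256x256 image."""
    def __init__(self, text_dim=128):
        super().__init__()
        self.down = nn.Sequential(   # encode the Stage-I result
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),    # 32x32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16x16
        )
        self.up = nn.Sequential(     # refine and upsample to 256x256
            nn.ConvTranspose2d(128 + text_dim, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, stage1_img, text_emb):
        h = self.down(stage1_img)                               # (B, 128, 16, 16)
        t = text_emb[:, :, None, None].expand(-1, -1, 16, 16)   # broadcast text code
        return self.up(torch.cat([h, t], dim=1))                # (B, 3, 256, 256)

# usage with random inputs
g1, g2 = StageIGenerator(), StageIIGenerator()
text, z = torch.randn(2, 128), torch.randn(2, 100)
low = g1(text, z)       # 64x64 sketch of shape and color
high = g2(low, text)    # 256x256 refinement
print(low.shape, high.shape)
```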

Original link:

Code link:


Reposted from: http://xpdqb.baihongyu.com/
