| date | arxiv_id | votes | title | abstract | url |
|---|---|---|---|---|---|
2023-05-04
|
2305.03048
| 9
|
Personalize Segment Anything Model with One Shot
|
Driven by large-data pre-training, the Segment Anything Model (SAM) has been
demonstrated as a powerful and promptable framework, revolutionizing
segmentation models. Despite this generality, customizing SAM for specific
visual concepts without manual prompting remains underexplored, e.g.,
automatically segmenting your pet dog in different images. In this paper, we
propose a training-free personalization approach for SAM, termed PerSAM.
Given only a single image with a reference mask, PerSAM first localizes the
target concept by a location prior, and segments it within other images or
videos via three techniques: target-guided attention, target-semantic
prompting, and cascaded post-refinement. In this way, we effectively adapt SAM
for private use without any training. To further alleviate the mask ambiguity,
we present an efficient one-shot fine-tuning variant, PerSAM-F. Freezing the
entire SAM, we introduce two learnable weights for multi-scale masks, only
training 2 parameters within 10 seconds for improved performance. To
demonstrate our efficacy, we construct a new segmentation dataset, PerSeg, for
personalized evaluation, and test our methods on video object segmentation with
competitive performance. Besides, our approach can also enhance DreamBooth to
personalize Stable Diffusion for text-to-image generation, discarding
background disturbance for better target appearance learning. Code is released
at https://github.com/ZrrSkywalker/Personalize-SAM
|
https://huggingface.co/papers/2305.03048
|
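The location prior mentioned in the PerSAM abstract above can be illustrated with a short sketch: pool the reference image features inside the one-shot mask into a target embedding, take its cosine similarity with the features of a new image, and use the peak of the resulting confidence map as a point prompt for SAM. This is a minimal sketch of that single step, not the released PerSAM code; the frozen feature extractor is left abstract and all tensor shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def location_prior(ref_feats, ref_mask, tgt_feats):
    """Minimal sketch of a PerSAM-style location prior.

    ref_feats: (C, H, W) features of the reference image (any frozen encoder).
    ref_mask:  (H, W) binary mask of the target concept in the reference image.
    tgt_feats: (C, H, W) features of a new image to segment.
    Returns an (H, W) similarity map and the (row, col) of its peak,
    which can serve as a positive point prompt for SAM.
    """
    c, h, w = ref_feats.shape
    mask = ref_mask.float().view(1, h, w)
    # Average the reference features inside the mask -> target embedding.
    target_emb = (ref_feats * mask).sum(dim=(1, 2)) / mask.sum().clamp(min=1.0)
    target_emb = F.normalize(target_emb, dim=0)           # (C,)
    tgt = F.normalize(tgt_feats, dim=0)                   # (C, H, W), channel-normalized
    sim = torch.einsum("c,chw->hw", target_emb, tgt)      # cosine similarity map
    peak = torch.nonzero(sim == sim.max())[0]             # (row, col) of the peak
    return sim, tuple(peak.tolist())

# Toy usage with random tensors standing in for encoder outputs.
ref_feats = torch.randn(256, 64, 64)
ref_mask = torch.zeros(64, 64); ref_mask[20:40, 20:40] = 1
tgt_feats = torch.randn(256, 64, 64)
sim_map, point = location_prior(ref_feats, ref_mask, tgt_feats)
print(sim_map.shape, point)
```

The full method additionally uses this confidence map for target-guided attention and target-semantic prompting; the sketch covers only the point-prompt selection.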
2023-05-04
|
2305.02483
| 3
|
ChatGPT-steered Editing Instructor for Customization of Abstractive
Summarization
|
Tailoring outputs from large language models, like ChatGPT, to implicit user
preferences remains a challenge despite their impressive generative
capabilities. In this paper, we propose a tri-agent generation pipeline
comprising a generator, an instructor, and an editor to enhance output
personalization. The generator produces an initial output, the instructor
automatically generates editing instructions based on user preferences, and the
editor refines the output to align with those preferences. The inference-only
large language model (ChatGPT) serves as both the generator and editor, with a
smaller model acting as the instructor to guide output generation. We train the
instructor using editor-steered reinforcement learning, leveraging feedback
from a large-scale editor model to optimize instruction generation.
Experimental results on two abstractive summarization datasets demonstrate the
effectiveness of our approach in generating outputs that better meet user
expectations. Code is available at
https://github.com/Wendy-Xiao/chatgpt_editing_summ
|
https://huggingface.co/papers/2305.02483
|
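The tri-agent pipeline described above is essentially a fixed generate-instruct-edit loop. Below is a minimal sketch of that control flow with placeholder callables standing in for the model calls (per the abstract, ChatGPT plays the generator and editor and a smaller trained model plays the instructor); the prompt wording is illustrative, not the paper's.

```python
from typing import Callable

def tri_agent_summarize(document: str,
                        user_preference: str,
                        generator: Callable[[str], str],
                        instructor: Callable[[str], str],
                        editor: Callable[[str], str]) -> str:
    """Sketch of the generator -> instructor -> editor pipeline."""
    draft = generator(f"Summarize the following document:\n{document}")
    instruction = instructor(
        f"User preference: {user_preference}\n"
        f"Draft summary: {draft}\n"
        "Write a short editing instruction that adapts the draft to the preference."
    )
    revised = editor(
        f"Document: {document}\n"
        f"Draft summary: {draft}\n"
        f"Editing instruction: {instruction}\n"
        "Rewrite the summary following the instruction."
    )
    return revised

# Toy usage with an echo function standing in for model calls.
echo = lambda prompt: prompt.splitlines()[-1]
print(tri_agent_summarize("Some long article ...", "keep it under two sentences",
                          echo, echo, echo))
```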
2023-05-04
|
2305.02463
| 3
|
Shap-E: Generating Conditional 3D Implicit Functions
|
We present Shap-E, a conditional generative model for 3D assets. Unlike
recent work on 3D generative models which produce a single output
representation, Shap-E directly generates the parameters of implicit functions
that can be rendered as both textured meshes and neural radiance fields. We
train Shap-E in two stages: first, we train an encoder that deterministically
maps 3D assets into the parameters of an implicit function; second, we train a
conditional diffusion model on outputs of the encoder. When trained on a large
dataset of paired 3D and text data, our resulting models are capable of
generating complex and diverse 3D assets in a matter of seconds. When compared
to Point-E, an explicit generative model over point clouds, Shap-E converges
faster and reaches comparable or better sample quality despite modeling a
higher-dimensional, multi-representation output space. We release model
weights, inference code, and samples at https://github.com/openai/shap-e.
|
https://huggingface.co/papers/2305.02463
|
2023-05-04
|
2305.03047
| 1
|
Principle-Driven Self-Alignment of Language Models from Scratch with
Minimal Human Supervision
|
Recent AI-assistant agents, such as ChatGPT, predominantly rely on supervised
fine-tuning (SFT) with human annotations and reinforcement learning from human
feedback (RLHF) to align the output of large language models (LLMs) with human
intentions, ensuring they are helpful, ethical, and reliable. However, this
dependence can significantly constrain the true potential of AI-assistant
agents due to the high cost of obtaining human supervision and the related
issues on quality, reliability, diversity, self-consistency, and undesirable
biases. To address these challenges, we propose a novel approach called
SELF-ALIGN, which combines principle-driven reasoning and the generative power
of LLMs for the self-alignment of AI agents with minimal human supervision. Our
approach encompasses four stages: first, we use an LLM to generate synthetic
prompts, and a topic-guided method to augment the prompt diversity; second, we
use a small set of human-written principles for AI models to follow, and guide
the LLM through in-context learning from demonstrations (of principles
application) to produce helpful, ethical, and reliable responses to users'
queries; third, we fine-tune the original LLM with the high-quality
self-aligned responses so that the resulting model can generate desirable
responses for each query directly without the principle set and the
demonstrations anymore; and finally, we offer a refinement step to address the
issues of overly-brief or indirect responses. Applying SELF-ALIGN to the
LLaMA-65b base language model, we develop an AI assistant named Dromedary. With
fewer than 300 lines of human annotations (including < 200 seed prompts, 16
generic principles, and 5 exemplars for in-context learning), Dromedary
significantly surpasses the performance of several state-of-the-art AI systems,
including Text-Davinci-003 and Alpaca, on benchmark datasets with various
settings.
|
https://huggingface.co/papers/2305.03047
|
2023-05-05
|
2305.02549
| 6
|
FormNetV2: Multimodal Graph Contrastive Learning for Form Document
Information Extraction
|
The recent advent of self-supervised pre-training techniques has led to a
surge in the use of multimodal learning in form document understanding.
However, existing approaches that extend masked language modeling to other
modalities require careful multi-task tuning, complex reconstruction target
designs, or additional pre-training data. In FormNetV2, we introduce a
centralized multimodal graph contrastive learning strategy to unify
self-supervised pre-training for all modalities in one loss. The graph
contrastive objective maximizes the agreement of multimodal representations,
providing a natural interplay for all modalities without special customization.
In addition, we extract image features within the bounding box that joins a
pair of tokens connected by a graph edge, capturing more targeted visual cues
without loading a sophisticated and separately pre-trained image embedder.
FormNetV2 establishes new state-of-the-art performance on FUNSD, CORD, SROIE
and Payment benchmarks with a more compact model size.
|
https://huggingface.co/papers/2305.02549
|
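The FormNetV2 abstract above says the graph contrastive objective "maximizes the agreement of multimodal representations." A standard way to express such an agreement objective is an InfoNCE loss between two corrupted views of the same graph's node embeddings; the sketch below is one plausible instantiation, not the FormNetV2 loss itself, and the embedding size and temperature are assumptions.

```python
import torch
import torch.nn.functional as F

def graph_contrastive_loss(view_a: torch.Tensor,
                           view_b: torch.Tensor,
                           temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE-style agreement loss between two views of node embeddings.

    view_a, view_b: (N, D) embeddings of the same N graph nodes under two
    stochastic corruptions (e.g., dropping modalities or edges).
    Node i in view_a should match node i in view_b and no other node.
    """
    a = F.normalize(view_a, dim=1)
    b = F.normalize(view_b, dim=1)
    logits = a @ b.t() / temperature          # (N, N) similarity matrix
    targets = torch.arange(a.size(0))         # positives are on the diagonal
    # Symmetric cross-entropy over both matching directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Toy usage: 8 nodes with 128-dim multimodal embeddings per view.
loss = graph_contrastive_loss(torch.randn(8, 128), torch.randn(8, 128))
print(loss.item())
```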
2023-05-05
|
2305.03043
| 5
|
Single-Shot Implicit Morphable Faces with Consistent Texture
Parameterization
|
There is a growing demand for the accessible creation of high-quality 3D
avatars that are animatable and customizable. Although 3D morphable models
provide intuitive control for editing and animation, and robustness for
single-view face reconstruction, they cannot easily capture geometric and
appearance details. Methods based on neural implicit representations, such as
signed distance functions (SDF) or neural radiance fields, approach
photo-realism, but are difficult to animate and do not generalize well to
unseen data. To tackle this problem, we propose a novel method for constructing
implicit 3D morphable face models that are both generalizable and intuitive for
editing. Trained from a collection of high-quality 3D scans, our face model is
parameterized by geometry, expression, and texture latent codes with a learned
SDF and explicit UV texture parameterization. Once trained, we can reconstruct
an avatar from a single in-the-wild image by leveraging the learned prior to
project the image into the latent space of our model. Our implicit morphable
face models can be used to render an avatar from novel views, animate facial
expressions by modifying expression codes, and edit textures by directly
painting on the learned UV-texture maps. We demonstrate quantitatively and
qualitatively that our method improves upon photo-realism, geometry, and
expression accuracy compared to state-of-the-art methods.
|
https://huggingface.co/papers/2305.03043
|
2023-05-05
|
2305.03049
| 3
|
NeuralEditor: Editing Neural Radiance Fields via Manipulating Point
Clouds
|
This paper proposes NeuralEditor, which makes neural radiance fields (NeRFs)
natively editable for general shape editing tasks. Despite their impressive
results on novel-view synthesis, it remains a fundamental challenge for NeRFs
to edit the shape of the scene. Our key insight is to exploit the explicit
point cloud representation as the underlying structure to construct NeRFs,
inspired by the intuitive interpretation of NeRF rendering as a process that
projects or "plots" the associated 3D point cloud to a 2D image plane. To this
end, NeuralEditor introduces a novel rendering scheme based on deterministic
integration within K-D tree-guided density-adaptive voxels, which produces both
high-quality rendering results and precise point clouds through optimization.
NeuralEditor then performs shape editing via mapping associated points between
point clouds. Extensive evaluation shows that NeuralEditor achieves
state-of-the-art performance in both shape deformation and scene morphing
tasks. Notably, NeuralEditor supports both zero-shot inference and further
fine-tuning over the edited scene. Our code, benchmark, and demo video are
available at https://immortalco.github.io/NeuralEditor.
|
https://huggingface.co/papers/2305.03049
|
2023-05-05
|
2305.02665
| 3
|
Learning Language-Specific Layers for Multilingual Machine Translation
|
Multilingual Machine Translation promises to improve translation quality
between non-English languages. This is advantageous for several reasons, namely
lower latency (no need to translate twice), and reduced error cascades (e.g.,
avoiding losing gender and formality information when translating through
English). On the downside, adding more languages reduces model capacity per
language, which is usually countered by increasing the overall model size,
making training harder and inference slower. In this work, we introduce
Language-Specific Transformer Layers (LSLs), which allow us to increase model
capacity, while keeping the amount of computation and the number of parameters
used in the forward pass constant. The key idea is to have some layers of the
encoder be source or target language-specific, while keeping the remaining
layers shared. We study the best way to place these layers using a neural
architecture search inspired approach, and achieve an improvement of 1.3 chrF
(1.5 spBLEU) points over not using LSLs on a separate decoder architecture, and
1.9 chrF (2.2 spBLEU) on a shared decoder one.
|
https://huggingface.co/papers/2305.02665
|
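The key idea in the LSL abstract above (some encoder layers are source- or target-language-specific while the rest stay shared, so per-example compute stays constant) can be written down directly. Below is a minimal sketch under an assumed layer count and placement; the paper selects the placement with an architecture-search-inspired procedure.

```python
import torch
import torch.nn as nn

class LSLEncoder(nn.Module):
    """Sketch of a Transformer encoder with Language-Specific Layers (LSLs).

    Shared layers process every sentence; at the indices in `specific_at`,
    one layer per language is kept and the forward pass picks the copy
    matching the routing language. Only one copy runs per example, so the
    forward-pass compute does not grow as languages are added.
    """
    def __init__(self, d_model=512, nhead=8, num_layers=6,
                 specific_at=(2, 3), languages=("de", "fr", "en")):
        super().__init__()
        make = lambda: nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.specific_at = set(specific_at)
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            if i in self.specific_at:
                self.layers.append(nn.ModuleDict({lang: make() for lang in languages}))
            else:
                self.layers.append(make())

    def forward(self, x, lang: str):
        for i, layer in enumerate(self.layers):
            x = layer[lang](x) if i in self.specific_at else layer(x)
        return x

# Toy usage: a batch of 4 sentences, 10 tokens each, routed through German-specific layers.
enc = LSLEncoder()
out = enc(torch.randn(4, 10, 512), lang="de")
print(out.shape)
```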
2023-05-05
|
2305.02499
| 3
|
AutoML-GPT: Automatic Machine Learning with GPT
|
AI tasks encompass a wide range of domains and fields. While numerous AI
models have been designed for specific tasks and applications, they often
require considerable human efforts in finding the right model architecture,
optimization algorithm, and hyperparameters. Recent advances in large language
models (LLMs) like ChatGPT show remarkable capabilities in various aspects of
reasoning, comprehension, and interaction. Consequently, we propose developing
task-oriented prompts and automatically utilizing LLMs to automate the training
pipeline. To implement this concept, we present the AutoML-GPT, which employs
GPT as the bridge to diverse AI models and dynamically trains models with
optimized hyperparameters. AutoML-GPT dynamically takes user requests from the
model and data cards and composes the corresponding prompt paragraph.
Ultimately, with this prompt paragraph, AutoML-GPT will automatically conduct
the experiments from data processing to model architecture, hyperparameter
tuning, and predicted training log. By leveraging AutoML-GPT's robust language
capabilities and the available AI models, AutoML-GPT can tackle numerous
intricate AI tasks across various domains and datasets. This approach achieves
remarkable results in computer vision, natural language processing, and other
challenging areas. Extensive experiments and ablation studies demonstrate that
our method can be general, effective, and beneficial for many AI tasks.
|
https://huggingface.co/papers/2305.02499
|
2023-05-05
|
2305.02783
| 2
|
Automated Code generation for Information Technology Tasks in YAML
through Large Language Models
|
The recent improvement in code generation capabilities due to the use of
large language models has mainly benefited general purpose programming
languages. Domain specific languages, such as the ones used for IT Automation,
have received far less attention, despite involving many active developers and
being an essential component of modern cloud platforms. This work focuses on
the generation of Ansible-YAML, a widely used markup language for IT
Automation. We present Ansible Wisdom, a natural-language to Ansible-YAML code
generation tool, aimed at improving IT automation productivity. Ansible Wisdom
is a transformer-based model, extended by training with a new dataset
containing Ansible-YAML. We also develop two novel performance metrics for YAML
and Ansible to capture the specific characteristics of this domain. Results
show that Ansible Wisdom can accurately generate Ansible scripts from natural
language prompts, with performance comparable to or better than existing
state-of-the-art code generation models.
|
https://huggingface.co/papers/2305.02783
|
2023-05-05
|
2305.03052
| 1
|
Tracking through Containers and Occluders in the Wild
|
Tracking objects with persistence in cluttered and dynamic environments
remains a difficult challenge for computer vision systems. In this paper, we
introduce TCOW, a new benchmark and model for visual tracking
through heavy occlusion and containment. We set up a task where the goal is to,
given a video sequence, segment both the projected extent of the target object
and the surrounding container or occluder whenever one exists. To study
this task, we create a mixture of synthetic and annotated real datasets to
support both supervised learning and structured evaluation of model performance
under various forms of task variation, such as moving or nested containment. We
evaluate two recent transformer-based video models and find that while they can
be surprisingly capable of tracking targets under certain settings of task
variation, there remains a considerable performance gap before we can claim a
tracking model to have acquired a true notion of object permanence.
|
https://huggingface.co/papers/2305.03052
|
2023-05-05
|
2305.03040
| 1
|
TUVF: Learning Generalizable Texture UV Radiance Fields
|
Textures are a vital aspect of creating visually appealing and realistic 3D
models. In this paper, we study the problem of generating high-fidelity texture
given shapes of 3D assets, which has been relatively less explored compared
with generic 3D shape modeling. Our goal is to facilitate a controllable
texture generation process, such that one texture code can correspond to a
particular appearance style independent of any input shapes from a category. We
introduce Texture UV Radiance Fields (TUVF) that generate textures in a
learnable UV sphere space rather than directly on the 3D shape. This allows the
texture to be disentangled from the underlying shape and transferable to other
shapes that share the same UV space, i.e., from the same category. We integrate
the UV sphere space with the radiance field, which provides a more efficient
and accurate representation of textures than traditional texture maps. We
perform our experiments on real-world object datasets where we achieve not only
realistic synthesis but also substantial improvements over the state of the art
in texture control and editing. Project Page: https://www.anjiecheng.me/TUVF
|
https://huggingface.co/papers/2305.03040
|
2023-05-05
|
2305.03027
| 1
|
NeRSemble: Multi-view Radiance Field Reconstruction of Human Heads
|
We focus on reconstructing high-fidelity radiance fields of human heads,
capturing their animations over time, and synthesizing re-renderings from novel
viewpoints at arbitrary time steps. To this end, we propose a new multi-view
capture setup composed of 16 calibrated machine vision cameras that record
time-synchronized images at 7.1 MP resolution and 73 frames per second. With
our setup, we collect a new dataset of over 4700 high-resolution,
high-framerate sequences of more than 220 human heads, from which we introduce
a new human head reconstruction benchmark. The recorded sequences cover a wide
range of facial dynamics, including head motions, natural expressions,
emotions, and spoken language. In order to reconstruct high-fidelity human
heads, we propose Dynamic Neural Radiance Fields using Hash Ensembles
(NeRSemble). We represent scene dynamics by combining a deformation field and
an ensemble of 3D multi-resolution hash encodings. The deformation field allows
for precise modeling of simple scene movements, while the ensemble of hash
encodings helps to represent complex dynamics. As a result, we obtain radiance
field representations of human heads that capture motion over time and
facilitate re-rendering of arbitrary novel viewpoints. In a series of
experiments, we explore the design choices of our method and demonstrate that
our approach outperforms state-of-the-art dynamic radiance field approaches by
a significant margin.
|
https://huggingface.co/papers/2305.03027
|
2023-05-05
|
2305.02968
| 1
|
Masked Trajectory Models for Prediction, Representation, and Control
|
We introduce Masked Trajectory Models (MTM) as a generic abstraction for
sequential decision making. MTM takes a trajectory, such as a state-action
sequence, and aims to reconstruct the trajectory conditioned on random subsets
of the same trajectory. By training with a highly randomized masking pattern,
MTM learns versatile networks that can take on different roles or capabilities,
by simply choosing appropriate masks at inference time. For example, the same
MTM network can be used as a forward dynamics model, inverse dynamics model, or
even an offline RL agent. Through extensive experiments in several continuous
control tasks, we show that the same MTM network -- i.e. same weights -- can
match or outperform specialized networks trained for the aforementioned
capabilities. Additionally, we find that state representations learned by MTM
can significantly accelerate the learning speed of traditional RL algorithms.
Finally, in offline RL benchmarks, we find that MTM is competitive with
specialized offline RL algorithms, despite MTM being a generic self-supervised
learning method without any explicit RL components. Code is available at
https://github.com/facebookresearch/mtm
|
https://huggingface.co/papers/2305.02968
|
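The MTM abstract above says the model "aims to reconstruct the trajectory conditioned on random subsets of the same trajectory" using a "highly randomized masking pattern." The core data operation is drawing a random mask over trajectory tokens and computing a reconstruction loss on the masked positions. Below is a minimal sketch of that step with a toy bidirectional encoder; the mask ratios and dimensions are assumptions, not values from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def random_trajectory_mask(batch: int, length: int, max_mask_ratio: float = 0.9):
    """Sample a per-sequence mask ratio, then mask that fraction of tokens."""
    ratios = torch.rand(batch, 1) * max_mask_ratio
    return torch.rand(batch, length) < ratios          # True = hidden from the model

class TinyMTM(nn.Module):
    """Toy masked trajectory model: embed tokens, hide masked ones, reconstruct all."""
    def __init__(self, token_dim: int, d_model: int = 64):
        super().__init__()
        self.embed = nn.Linear(token_dim, d_model)
        self.mask_token = nn.Parameter(torch.zeros(d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, token_dim)

    def forward(self, tokens, mask):
        h = self.embed(tokens)
        h = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(h), h)
        return self.head(self.encoder(h))

# Toy usage: 8 trajectories of 16 interleaved state/action tokens (dim 10).
tokens = torch.randn(8, 16, 10)
mask = random_trajectory_mask(8, 16)
model = TinyMTM(token_dim=10)
recon = model(tokens, mask)
loss = F.mse_loss(recon[mask], tokens[mask])   # reconstruct only the masked tokens
print(loss.item())
```

Choosing the mask at inference time is what switches roles: masking future states yields a forward dynamics model, masking actions yields an inverse dynamics model, and so on.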
2023-05-05
|
2305.02790
| 1
|
BranchNorm: Robustly Scaling Extremely Deep Transformers
|
Recently, DeepNorm has scaled Transformers to extreme depth (i.e., 1000
layers), revealing the promising potential of deep scaling. To stabilize the
training of deep models, DeepNorm (Wang et al., 2022) attempts to constrain the
model update to a constant value. Although applying such a constraint can
benefit the early stage of model training, it may lead to undertrained models
during the whole training procedure. In this paper, we propose BranchNorm,
which dynamically rescales the non-residual branch of Transformer in accordance
with the training period. BranchNorm not only theoretically stabilizes the
training with smooth gradient norms at the early stage, but also encourages
better convergence in the subsequent training stage. Experiment results on
multiple translation tasks demonstrate that BranchNorm achieves a better
trade-off between training stability and convergence performance.
|
https://huggingface.co/papers/2305.02790
|
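The BranchNorm abstract above says the method "dynamically rescales the non-residual branch of [the] Transformer in accordance with the training period." One plausible reading is a residual connection of the form x + scale(t) * f(x), where the scale grows on a schedule as training progresses. The sketch below illustrates that reading only; the schedule shape and endpoints are assumptions, not the paper's formula.

```python
import torch
import torch.nn as nn

class ScheduledBranchScale(nn.Module):
    """Residual block whose non-residual branch is rescaled over training.

    output = x + scale(step) * branch(x), with scale ramping linearly from
    `start` to `end` over `warmup_steps` training steps (an assumed schedule,
    illustrating training-period-dependent rescaling).
    """
    def __init__(self, branch: nn.Module, warmup_steps: int = 10_000,
                 start: float = 0.1, end: float = 1.0):
        super().__init__()
        self.branch = branch
        self.warmup_steps, self.start, self.end = warmup_steps, start, end
        self.register_buffer("step", torch.zeros((), dtype=torch.long))

    def current_scale(self) -> float:
        t = min(self.step.item() / self.warmup_steps, 1.0)
        return self.start + t * (self.end - self.start)

    def forward(self, x):
        if self.training:
            self.step += 1            # advance the schedule once per training call
        return x + self.current_scale() * self.branch(x)

# Toy usage: wrap a feed-forward sub-layer of width 512.
block = ScheduledBranchScale(nn.Sequential(nn.Linear(512, 2048), nn.ReLU(),
                                           nn.Linear(2048, 512)))
print(block(torch.randn(4, 512)).shape, block.current_scale())
```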
2023-05-05
|
2305.02678
| 1
|
Real-Time Neural Appearance Models
|
We present a complete system for real-time rendering of scenes with complex
appearance previously reserved for offline use. This is achieved with a
combination of algorithmic and system level innovations.
Our appearance model utilizes learned hierarchical textures that are
interpreted using neural decoders, which produce reflectance values and
importance-sampled directions. To best utilize the modeling capacity of the
decoders, we equip the decoders with two graphics priors. The first prior --
transformation of directions into learned shading frames -- facilitates
accurate reconstruction of mesoscale effects. The second prior -- a microfacet
sampling distribution -- allows the neural decoder to perform importance
sampling efficiently. The resulting appearance model supports anisotropic
sampling and level-of-detail rendering, and allows baking deeply layered
material graphs into a compact unified neural representation.
By exposing hardware accelerated tensor operations to ray tracing shaders, we
show that it is possible to inline and execute the neural decoders efficiently
inside a real-time path tracer. We analyze scalability with an increasing
number of neural materials and propose to improve performance using code
optimized for
coherent and divergent execution. Our neural material shaders can be over an
order of magnitude faster than non-neural layered materials. This opens up the
door for using film-quality visuals in real-time applications such as games and
live previews.
|
https://huggingface.co/papers/2305.02678
|
2023-05-05
|
2305.02440
| 1
|
Cheaply Evaluating Inference Efficiency Metrics for Autoregressive
Transformer APIs
|
Large language models (LLMs) power many state-of-the-art systems in natural
language processing. However, these models are extremely computationally
expensive, even at inference time, raising the natural question: when is the
extra cost of deploying a larger model worth the anticipated boost in
capabilities? Better understanding this tradeoff fundamentally could benefit
from an inference efficiency metric that is both (i) easily comparable across
models from different providers, and (ii) representative of the true cost of
running queries in an isolated performance environment. Unfortunately, access
to LLMs today is largely restricted to black-box text generation APIs and raw
runtimes measured through this interface do not satisfy these desiderata: model
providers can apply various software and hardware optimizations orthogonal to
the model, and models served on shared infrastructure are susceptible to
performance contention. To circumvent these problems, we propose a new metric
for comparing inference efficiency across models. This metric puts models on
equal footing as though they were served (i) on uniform hardware and software,
and (ii) without performance contention. We call this metric the
idealized runtime, and we propose a methodology to efficiently estimate
this metric for autoregressive Transformer models. We also propose cost-aware
variants that incorporate the number of accelerators needed to serve the model.
Using these metrics, we compare ten state-of-the-art LLMs to provide the first
analysis of inference efficiency-capability tradeoffs; we make several
observations from this analysis, including the fact that the superior inference
runtime performance of certain APIs is often a byproduct of optimizations
within the API rather than the underlying model. Our methodology also
facilitates the efficient comparison of different software and hardware stacks.
|
https://huggingface.co/papers/2305.02440
|
2023-05-05
|
2305.02412
| 1
|
Plan, Eliminate, and Track -- Language Models are Good Teachers for
Embodied Agents
|
Pre-trained large language models (LLMs) capture procedural knowledge about
the world. Recent work has leveraged LLMs' ability to generate abstract plans
to simplify challenging control tasks, either by action scoring, or action
modeling (fine-tuning). However, the transformer architecture inherits several
constraints that make it difficult for the LLM to directly serve as the agent:
e.g. limited input lengths, fine-tuning inefficiency, bias from pre-training,
and incompatibility with non-text environments. To maintain compatibility with
a low-level trainable actor, we propose to instead use the knowledge in LLMs to
simplify the control problem, rather than solving it. We propose the Plan,
Eliminate, and Track (PET) framework. The Plan module translates a task
description into a list of high-level sub-tasks. The Eliminate module masks out
irrelevant objects and receptacles from the observation for the current
sub-task. Finally, the Track module determines whether the agent has
accomplished each sub-task. On the AlfWorld instruction following benchmark,
the PET framework leads to a significant 15% improvement over SOTA for
generalization to human goal specifications.
|
https://huggingface.co/papers/2305.02412
|
2023-05-07
|
2305.03111
| 10
|
Can LLM Already Serve as A Database Interface? A BIg Bench for
Large-Scale Database Grounded Text-to-SQLs
|
Text-to-SQL parsing, which aims at converting natural language instructions
into executable SQLs, has gained increasing attention in recent years. In
particular, Codex and ChatGPT have shown impressive results in this task.
However, most of the prevalent benchmarks, e.g., Spider and WikiSQL, focus on
database schemas with few rows of database content, leaving a gap between
academic study and real-world applications. To mitigate this gap, we present
Bird, a big benchmark for large-scale database grounded in text-to-SQL tasks,
containing 12,751 pairs of text-to-SQL data and 95 databases with a total size
of 33.4 GB, spanning 37 professional domains. Our emphasis on database values
highlights the new challenges of dirty database contents, external knowledge
between NL questions and database contents, and SQL efficiency, particularly in
the context of massive databases. To solve these problems, text-to-SQL models
must feature database value comprehension in addition to semantic parsing. The
experimental results demonstrate the significance of database values in
generating accurate text-to-SQLs for big databases. Furthermore, even the most
effective text-to-SQL model, i.e., ChatGPT, achieves only 40.08% execution
accuracy, which is still far from the human result of 92.96%, showing that
challenges still stand. Besides, we also provide an efficiency analysis to
offer insights into generating text-to-efficient-SQLs that are beneficial to
industries. We believe that BIRD will contribute to advancing real-world
applications of text-to-SQL research. The leaderboard and source code are
available: https://bird-bench.github.io/.
|
https://huggingface.co/papers/2305.03111
|
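Execution accuracy, the metric quoted in the BIRD abstract above (40.08% for ChatGPT vs. 92.96% for humans), counts a predicted SQL query as correct when it returns the same result set as the gold query on the actual database. Below is a minimal sketch of that comparison over SQLite databases; it is not the official BIRD evaluator, and comparing results as unordered multisets is an assumption.

```python
import sqlite3
from contextlib import closing

def execution_match(db_path: str, pred_sql: str, gold_sql: str) -> bool:
    """Return True if both queries run and yield the same multiset of rows."""
    with closing(sqlite3.connect(db_path)) as conn:
        try:
            pred_rows = conn.execute(pred_sql).fetchall()
        except sqlite3.Error:
            return False              # an un-executable prediction counts as wrong
        gold_rows = conn.execute(gold_sql).fetchall()
    return sorted(map(repr, pred_rows)) == sorted(map(repr, gold_rows))

def execution_accuracy(db_path: str, pairs) -> float:
    """pairs: iterable of (predicted_sql, gold_sql) strings for one database."""
    pairs = list(pairs)
    hits = sum(execution_match(db_path, p, g) for p, g in pairs)
    return hits / len(pairs) if pairs else 0.0

# Toy usage against a small on-disk database.
with closing(sqlite3.connect("toy.db")) as conn:
    conn.executescript("DROP TABLE IF EXISTS t; CREATE TABLE t(x INT); "
                       "INSERT INTO t VALUES (1),(2),(3);")
print(execution_accuracy("toy.db", [("SELECT x FROM t WHERE x > 1",
                                     "SELECT x FROM t WHERE x >= 2")]))
```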
2023-05-07
|
2305.03726
| 6
|
Otter: A Multi-Modal Model with In-Context Instruction Tuning
|
Large language models (LLMs) have demonstrated significant universal
capabilities as few/zero-shot learners in various tasks due to their
pre-training on vast amounts of text data, as exemplified by GPT-3, which
evolved into InstructGPT and ChatGPT, effectively following natural language
instructions to accomplish real-world tasks. In this paper, we propose to
introduce instruction tuning into multi-modal models, motivated by the Flamingo
model's upstream interleaved format pretraining dataset. We adopt a similar
approach to construct our MultI-Modal In-Context Instruction Tuning (MIMIC-IT)
dataset. We then introduce Otter, a multi-modal model based on OpenFlamingo
(open-sourced version of DeepMind's Flamingo), trained on MIMIC-IT and
showcasing improved instruction-following ability and in-context learning. We
also optimize OpenFlamingo's implementation for researchers, democratizing the
required training resources from 1x A100 GPU to 4x RTX-3090 GPUs,
and integrate both OpenFlamingo and Otter into Huggingface Transformers for
more researchers to incorporate the models into their customized training and
inference pipelines.
|
https://huggingface.co/papers/2305.03726
|
2023-05-07
|
2305.03695
| 4
|
Vera: A General-Purpose Plausibility Estimation Model for Commonsense
Statements
|
Despite the much discussed capabilities of today's language models, they are
still prone to silly and unexpected commonsense failures. We consider a
retrospective verification approach that reflects on the correctness of LM
outputs, and introduce Vera, a general-purpose model that estimates the
plausibility of declarative statements based on commonsense knowledge. Trained
on ~7M commonsense statements created from 19 QA datasets and two large-scale
knowledge bases, and with a combination of three training objectives, Vera is a
versatile model that effectively separates correct from incorrect statements
across diverse commonsense domains. When applied to solving commonsense
problems in the verification format, Vera substantially outperforms existing
models that can be repurposed for commonsense verification, and it further
exhibits generalization capabilities to unseen tasks and provides
well-calibrated outputs. We find that Vera excels at filtering LM-generated
commonsense knowledge and is useful in detecting erroneous commonsense
statements generated by models like ChatGPT in real-world settings.
|
https://huggingface.co/papers/2305.03695
|
2023-05-07
|
2305.03210
| 1
|
AttentionViz: A Global View of Transformer Attention
|
Transformer models are revolutionizing machine learning, but their inner
workings remain mysterious. In this work, we present a new visualization
technique designed to help researchers understand the self-attention mechanism
in transformers that allows these models to learn rich, contextual
relationships between elements of a sequence. The main idea behind our method
is to visualize a joint embedding of the query and key vectors used by
transformer models to compute attention. Unlike previous attention
visualization techniques, our approach enables the analysis of global patterns
across multiple input sequences. We create an interactive visualization tool,
AttentionViz, based on these joint query-key embeddings, and use it to study
attention mechanisms in both language and vision transformers. We demonstrate
the utility of our approach in improving model understanding and offering new
insights about query-key interactions through several application scenarios and
expert feedback.
|
https://huggingface.co/papers/2305.03210
|
2023-05-07
|
2305.03509
| 1
|
Diffusion Explainer: Visual Explanation for Text-to-image Stable
Diffusion
|
Diffusion-based generative models' impressive ability to create convincing
images has captured global attention. However, their complex internal
structures and operations often make them difficult for non-experts to
understand. We present Diffusion Explainer, the first interactive visualization
tool that explains how Stable Diffusion transforms text prompts into images.
Diffusion Explainer tightly integrates a visual overview of Stable Diffusion's
complex components with detailed explanations of their underlying operations,
enabling users to fluidly transition between multiple levels of abstraction
through animations and interactive elements. By comparing the evolutions of
image representations guided by two related text prompts over refinement
timesteps, users can discover the impact of prompts on image generation.
Diffusion Explainer runs locally in users' web browsers without the need for
installation or specialized hardware, broadening the public's education access
to modern AI techniques. Our open-sourced tool is available at:
https://poloclub.github.io/diffusion-explainer/.
|
https://huggingface.co/papers/2305.03509
|
2023-05-07
|
2305.03514
| 1
|
Can Large Language Models Transform Computational Social Science?
|
Large Language Models (LLMs) like ChatGPT are capable of successfully
performing many language processing tasks zero-shot (without the need for
training data). If this capacity also applies to the coding of social phenomena
like persuasiveness and political ideology, then LLMs could effectively
transform Computational Social Science (CSS). This work provides a road map for
using LLMs as CSS tools. Towards this end, we contribute a set of prompting
best practices and an extensive evaluation pipeline to measure the zero-shot
performance of 13 language models on 24 representative CSS benchmarks. On
taxonomic labeling tasks (classification), LLMs fail to outperform the best
fine-tuned models but still achieve fair levels of agreement with humans. On
free-form coding tasks (generation), LLMs produce explanations that often
exceed the quality of crowdworkers' gold references. We conclude that today's
LLMs can radically augment the CSS research pipeline in two ways: (1) serving
as zero-shot data annotators on human annotation teams, and (2) bootstrapping
challenging creative generation tasks (e.g., explaining the hidden meaning
behind text). In summary, LLMs can significantly reduce costs and increase
efficiency of social science analysis in partnership with humans.
|
https://huggingface.co/papers/2305.03514
|
2023-05-07
|
2305.03719
| 0
|
Governance of the AI, by the AI, and for the AI
|
Over the past half century, there have been several false dawns during which
the "arrival" of world-changing artificial intelligence (AI) has been heralded.
Tempting fate, the authors believe the age of AI has, indeed, finally arrived.
Powerful image generators, such as DALL-E 2 and Midjourney, have suddenly
allowed anyone with access to easily create rich and complex art. In a
similar vein, text generators, such as GPT3.5 (including ChatGPT) and BLOOM,
allow users to compose detailed written descriptions of many topics of
interest. And, it is even possible now for a person without extensive expertise
in writing software to use AI to generate code capable of myriad applications.
While AI will continue to evolve and improve, probably at a rapid rate, the
current state of AI is already ushering in profound changes to many different
sectors of society. Every new technology challenges the ability of humanity to
govern it wisely. However, governance is usually viewed as both possible and
necessary due to the disruption new technology often poses to social
structures, industries, the environment, and other important human concerns. In
this article, we offer an analysis of a range of interactions between AI and
governance, with the hope that wise decisions may be made that maximize
benefits and minimize costs. The article addresses two main aspects of this
relationship: the governance of AI by humanity, and the governance of humanity
by AI. The approach we have taken is itself informed by AI, as this article was
written collaboratively by the authors and ChatGPT.
|
https://huggingface.co/papers/2305.03719
|
2023-05-08
|
2305.04745
| 3
|
Controllable Light Diffusion for Portraits
|
We introduce light diffusion, a novel method to improve lighting in
portraits, softening harsh shadows and specular highlights while preserving
overall scene illumination. Inspired by professional photographers' diffusers
and scrims, our method softens lighting given only a single portrait photo.
Previous portrait relighting approaches focus on changing the entire lighting
environment, removing shadows (ignoring strong specular highlights), or
removing shading entirely. In contrast, we propose a learning based method that
allows us to control the amount of light diffusion and apply it on in-the-wild
portraits. Additionally, we design a method to synthetically generate plausible
external shadows with sub-surface scattering effects while conforming to the
shape of the subject's face. Finally, we show how our approach can increase the
robustness of higher level vision applications, such as albedo estimation,
geometry estimation and semantic segmentation.
|
https://huggingface.co/papers/2305.04745
|
2023-05-08
|
2305.04461
| 2
|
Locally Attentional SDF Diffusion for Controllable 3D Shape Generation
|
Although the recent rapid evolution of 3D generative neural networks greatly
improves 3D shape generation, it is still not convenient for ordinary users to
create 3D shapes and control the local geometry of generated shapes. To address
these challenges, we propose a diffusion-based 3D generation framework --
locally attentional SDF diffusion, to model plausible 3D shapes, via 2D sketch
image input. Our method is built on a two-stage diffusion model. The first
stage, named occupancy-diffusion, aims to generate a low-resolution occupancy
field to approximate the shape shell. The second stage, named SDF-diffusion,
synthesizes a high-resolution signed distance field within the occupied voxels
determined by the first stage to extract fine geometry. Our model is empowered
by a novel view-aware local attention mechanism for image-conditioned shape
generation, which takes advantage of 2D image patch features to guide 3D voxel
feature learning, greatly improving local controllability and model
generalizability. Through extensive experiments in sketch-conditioned and
category-conditioned 3D shape generation tasks, we validate and demonstrate the
ability of our method to provide plausible and diverse 3D shapes, as well as
its superior controllability and generalizability over existing work. Our code
and trained models are available at
https://zhengxinyang.github.io/projects/LAS-Diffusion.html
|
https://huggingface.co/papers/2305.04461
|
2023-05-08
|
2305.04160
| 2
|
X-LLM: Bootstrapping Advanced Large Language Models by Treating
Multi-Modalities as Foreign Languages
|
Large language models (LLMs) have demonstrated remarkable language abilities.
GPT-4, based on advanced LLMs, exhibits extraordinary multimodal capabilities
beyond previous visual language models. We attribute this to the use of more
advanced LLMs compared with previous multimodal models. Unfortunately, the
model architecture and training strategies of GPT-4 are unknown. To endow LLMs
with multimodal capabilities, we propose X-LLM, which converts Multi-modalities
(images, speech, videos) into foreign languages using X2L interfaces and inputs
them into a large Language model (ChatGLM). Specifically, X-LLM aligns multiple
frozen single-modal encoders and a frozen LLM using X2L interfaces, where "X"
denotes multi-modalities such as image, speech, and videos, and "L" denotes
languages. X-LLM's training consists of three stages: (1) Converting Multimodal
Information: The first stage trains each X2L interface to align with its
respective single-modal encoder separately to convert multimodal information
into languages. (2) Aligning X2L representations with the LLM: single-modal
encoders are aligned with the LLM through X2L interfaces independently. (3)
Integrating multiple modalities: all single-modal encoders are aligned with the
LLM through X2L interfaces to integrate multimodal capabilities into the LLM.
Our experiments show that X-LLM demonstrates impressive multimodal chat
abilities, sometimes exhibiting the behaviors of multimodal GPT-4 on unseen
images/instructions, and yields an 84.5% relative score compared with GPT-4 on
a synthetic multimodal instruction-following dataset. We also conduct
quantitative tests on using LLM for ASR and multimodal ASR, hoping to promote
the era of LLM-based speech recognition.
|
https://huggingface.co/papers/2305.04160
|
2023-05-08
|
2305.03689
| 2
|
COLA: How to adapt vision-language models to Compose Objects Localized
with Attributes?
|
Compositional reasoning is a hallmark of human visual intelligence; yet
despite the size of large vision-language models, they struggle to represent
simple compositions by combining objects with their attributes. To measure this
lack of compositional capability, we design Cola, a text-to-image retrieval
benchmark to Compose Objects Localized with Attributes. Using Cola as a
testbed, we explore modeling designs to adapt pre-trained vision-language
models to reason compositionally about multiple attributes attached to multiple
objects. We explore 6 finetuning strategies on 2 seminal vision-language
models, using 3 finetuning datasets and 2 test benchmarks (Cola and CREPE).
Surprisingly, our optimal finetuning strategy improves a 151M parameter CLIP,
which disjointly encodes image and language during pretraining, to perform as
well as a 241M parameter FLAVA, which uses a multi-modal transformer encoder
during pretraining to attend over both vision and language modalities. This
optimal finetuning strategy is a lightweight multi-modal adapter that jointly
attends over both image and language features generated by the pretrained
model. We show this works better than common strategies such as
prompt/fine-tuning, or tuning a comparable number of unimodal layers.
|
https://huggingface.co/papers/2305.03689
|
2023-05-08
|
2305.04391
| 1
|
A Variational Perspective on Solving Inverse Problems with Diffusion
Models
|
Diffusion models have emerged as a key pillar of foundation models in visual
domains. One of their critical applications is to universally solve different
downstream inverse tasks via a single diffusion prior without re-training for
each task. Most inverse tasks can be formulated as inferring a posterior
distribution over data (e.g., a full image) given a measurement (e.g., a masked
image). This is however challenging in diffusion models since the nonlinear and
iterative nature of the diffusion process renders the posterior intractable. To
cope with this challenge, we propose a variational approach that by design
seeks to approximate the true posterior distribution. We show that our approach
naturally leads to regularization by denoising diffusion process (RED-Diff)
where denoisers at different timesteps concurrently impose different structural
constraints over the image. To gauge the contribution of denoisers from
different timesteps, we propose a weighting mechanism based on
signal-to-noise-ratio (SNR). Our approach provides a new variational
perspective for solving inverse problems with diffusion models, allowing us to
formulate sampling as stochastic optimization, where one can simply apply
off-the-shelf solvers with lightweight iterates. Our experiments for image
restoration tasks such as inpainting and superresolution demonstrate the
strengths of our method compared with state-of-the-art sampling-based diffusion
models.
|
https://huggingface.co/papers/2305.04391
|
2023-05-08
|
2305.03713
| 1
|
Avatar Fingerprinting for Authorized Use of Synthetic Talking-Head
Videos
|
Modern generators render talking-head videos with impressive levels of
photorealism, ushering in new user experiences such as videoconferencing under
constrained bandwidth budgets. Their safe adoption, however, requires a
mechanism to verify if the rendered video is trustworthy. For instance, for
videoconferencing we must identify cases in which a synthetic video portrait
uses the appearance of an individual without their consent. We term this task
avatar fingerprinting. We propose to tackle it by leveraging facial motion
signatures unique to each person. Specifically, we learn an embedding in which
the motion signatures of one identity are grouped together, and pushed away
from those of other identities, regardless of the appearance in the synthetic
video. Avatar fingerprinting algorithms will be critical as talking head
generators become more ubiquitous, and yet no large scale datasets exist for
this new task. Therefore, we contribute a large dataset of people delivering
scripted and improvised short monologues, accompanied by synthetic videos in
which we render videos of one person using the facial appearance of another.
Project page: https://research.nvidia.com/labs/nxp/avatar-fingerprinting/.
|
https://huggingface.co/papers/2305.03713
|
2023-05-08
|
2305.03668
| 1
|
A Suite of Generative Tasks for Multi-Level Multimodal Webpage
Understanding
|
Webpages have been a rich, scalable resource for vision-language and language
only tasks. Yet only pieces of webpages are kept in existing datasets:
image-caption pairs, long text articles, or raw HTML, never all in one place.
Webpage tasks have consequently received little attention, and structured
image-text data has been left underused. To study multimodal webpage understanding, we
introduce the Wikipedia Webpage suite (WikiWeb2M) containing 2M pages with all
of the associated image, text, and structure data. We verify its utility on
three generative tasks: page description generation, section summarization, and
contextual image captioning. We design a novel attention mechanism Prefix
Global, which selects the most relevant image and text content as global tokens
to attend to the rest of the webpage for context. By using page structure to
separate such tokens, it performs better than full attention with lower
computational complexity. Extensive experiments show that the new data in
WikiWeb2M improves task performance compared to prior work.
|
https://huggingface.co/papers/2305.03668
|
2023-05-08
|
2305.03286
| 1
|
Composite Motion Learning with Task Control
|
We present a deep learning method for composite and task-driven motion
control for physically simulated characters. In contrast to existing
data-driven approaches using reinforcement learning that imitate full-body
motions, we learn decoupled motions for specific body parts from multiple
reference motions simultaneously and directly by leveraging the use of multiple
discriminators in a GAN-like setup. In this process, there is no need of any
manual work to produce composite reference motions for learning. Instead, the
control policy explores by itself how the composite motions can be combined
automatically. We further account for multiple task-specific rewards and train
a single, multi-objective control policy. To this end, we propose a novel
framework for multi-objective learning that adaptively balances the learning of
disparate motions from multiple sources and multiple goal-directed control
objectives. In addition, as composite motions are typically augmentations of
simpler behaviors, we introduce a sample-efficient method for training
composite control policies in an incremental manner, where we reuse a
pre-trained policy as the meta policy and train a cooperative policy that
adapts the meta one for new composite tasks. We show the applicability of our
approach on a variety of challenging multi-objective tasks involving both
composite motion imitation and multiple goal-directed control.
|
https://huggingface.co/papers/2305.03286
|
2023-05-09
|
2305.05176
| 6
|
FrugalGPT: How to Use Large Language Models While Reducing Cost and
Improving Performance
|
There is a rapidly growing number of large language models (LLMs) that users
can query for a fee. We review the cost associated with querying popular LLM
APIs, e.g. GPT-4, ChatGPT, J1-Jumbo, and find that these models have
heterogeneous pricing structures, with fees that can differ by two orders of
magnitude. In particular, using LLMs on large collections of queries and text
can be expensive. Motivated by this, we outline and discuss three types of
strategies that users can exploit to reduce the inference cost associated with
using LLMs: 1) prompt adaptation, 2) LLM approximation, and 3) LLM cascade. As
an example, we propose FrugalGPT, a simple yet flexible instantiation of LLM
cascade which learns which combinations of LLMs to use for different queries in
order to reduce cost and improve accuracy. Our experiments show that FrugalGPT
can match the performance of the best individual LLM (e.g. GPT-4) with up to
98% cost reduction or improve the accuracy over GPT-4 by 4% with the same cost.
The ideas and findings presented here lay a foundation for using LLMs
sustainably and efficiently.
|
https://huggingface.co/papers/2305.05176
|
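The LLM cascade strategy in the FrugalGPT abstract above (try cheaper models first and escalate only when the answer does not look reliable enough) can be sketched in a few lines. The model list, the scoring function, and the thresholds below are placeholders; FrugalGPT learns which model combinations and thresholds to use per query.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class CascadeModel:
    name: str
    cost_per_call: float                 # placeholder cost, e.g. USD per query
    generate: Callable[[str], str]       # stand-in for an API call
    threshold: float                     # accept the answer if score >= threshold

def llm_cascade(query: str, models: list[CascadeModel],
                score: Callable[[str, str], float]):
    """Query models from cheapest to most expensive; stop at the first answer
    whose reliability score clears that model's acceptance threshold."""
    spent, answer, last_name = 0.0, "", ""
    for m in sorted(models, key=lambda m: m.cost_per_call):
        answer = m.generate(query)
        spent += m.cost_per_call
        last_name = m.name
        if score(query, answer) >= m.threshold:
            return answer, m.name, spent
    return answer, last_name, spent      # fall back to the most expensive model's answer

# Toy usage with dummy generators and a dummy scorer.
cheap = CascadeModel("small-llm", 0.001, lambda q: "maybe 42", threshold=0.9)
big = CascadeModel("large-llm", 0.03, lambda q: "42", threshold=0.0)
dummy_score = lambda q, a: 0.5 if a.startswith("maybe") else 1.0
print(llm_cascade("What is 6 x 7?", [cheap, big], dummy_score))
```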
2023-05-09
|
2305.05644
| 5
|
Towards Building the Federated GPT: Federated Instruction Tuning
|
While "instruction-tuned" generative large language models (LLMs) have
demonstrated an impressive ability to generalize to new tasks, the training
phases heavily rely on large amounts of diverse and high-quality instruction
data (such as ChatGPT and GPT-4). Unfortunately, acquiring high-quality data,
especially when it comes to human-written data, can pose significant challenges
both in terms of cost and accessibility. Moreover, concerns related to privacy
can further limit access to such data, making the process of obtaining it a
complex and nuanced undertaking. Consequently, this hinders the generality of
the tuned models and may restrict their effectiveness in certain contexts. To
tackle this issue, our study introduces a new approach called Federated
Instruction Tuning (FedIT), which leverages federated learning (FL) as the
learning framework for the instruction tuning of LLMs. This marks the first
exploration of FL-based instruction tuning for LLMs. This is especially
important since text data is predominantly generated by end users. Therefore,
it is imperative to design and adapt FL approaches to effectively leverage
these users' diverse instructions stored on local devices, while preserving
privacy and ensuring data security. In the current paper, by conducting widely
used GPT-4 auto-evaluation, we demonstrate that by exploiting the heterogeneous
and diverse sets of instructions on the client's end with the proposed
framework FedIT, we improved the performance of LLMs compared to centralized
training with only limited local instructions. Further, in this paper, we
developed a GitHub repository named Shepherd. This repository offers a
foundational framework for exploring federated fine-tuning of LLMs using
heterogeneous instructions across diverse categories.
|
https://huggingface.co/papers/2305.05644
|
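Federated instruction tuning keeps instruction data on client devices: each client fine-tunes a copy of the model locally and only parameter updates are aggregated on the server. Below is a minimal FedAvg-style round over toy models, shown for intuition; it omits the parameter-efficient adapters, client sampling, and secure aggregation a real FedIT setup would need.

```python
import copy
import torch
import torch.nn as nn

def local_finetune(model: nn.Module, data, epochs: int = 1, lr: float = 1e-2):
    """One client's local tuning step (toy regression stand-in for instruction tuning)."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in data:
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(x), y)
            loss.backward()
            opt.step()
    return model.state_dict(), sum(len(x) for x, _ in data)

def fedavg_round(global_model: nn.Module, client_datasets):
    """One communication round: weighted average of client updates (FedAvg)."""
    states, sizes = [], []
    for data in client_datasets:
        local = copy.deepcopy(global_model)      # data never leaves the client
        state, n = local_finetune(local, data)
        states.append(state); sizes.append(n)
    total = sum(sizes)
    avg = {k: sum(s[k] * (n / total) for s, n in zip(states, sizes))
           for k in states[0]}
    global_model.load_state_dict(avg)
    return global_model

# Toy usage: 3 clients, each holding one small private batch.
torch.manual_seed(0)
model = nn.Linear(4, 1)
clients = [[(torch.randn(8, 4), torch.randn(8, 1))] for _ in range(3)]
model = fedavg_round(model, clients)
print(model.weight)
```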
2023-05-09
|
2305.05662
| 4
|
InternChat: Solving Vision-Centric Tasks by Interacting with Chatbots
Beyond Language
|
We present an interactive visual framework named InternChat, or iChat for
short. The framework integrates chatbots that have planning and reasoning
capabilities, such as ChatGPT, with non-verbal instructions like pointing
movements that enable users to directly manipulate images or videos on the
screen. Pointing (including gestures, cursors, etc.) movements can provide more
flexibility and precision in performing vision-centric tasks that require
fine-grained control, editing, and generation of visual content. The name
InternChat stands for interaction, nonverbal, and chatbots. Different from
existing interactive systems that rely on pure language, by incorporating
pointing instructions, the proposed iChat significantly improves the efficiency
of communication between users and chatbots, as well as the accuracy of
chatbots in vision-centric tasks, especially in complicated visual scenarios
where the number of objects is greater than 2. Additionally, in iChat, an
auxiliary control mechanism is used to improve the control capability of LLM,
and a large vision-language model termed Husky is fine-tuned for high-quality
multi-modal dialogue (impressing ChatGPT-3.5-turbo with 93.89% GPT-4 Quality).
We hope this work can spark new ideas and directions for future interactive
visual systems. The code is available at
https://github.com/OpenGVLab/InternChat.
|
https://huggingface.co/papers/2305.05662
|
2023-05-09
|
2305.04091
| 3
|
Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning
by Large Language Models
|
Large language models (LLMs) have recently been shown to deliver impressive
performance in various NLP tasks. To tackle multi-step reasoning tasks,
few-shot chain-of-thought (CoT) prompting includes a few manually crafted
step-by-step reasoning demonstrations which enable LLMs to explicitly generate
reasoning steps and improve their reasoning task accuracy. To eliminate the
manual effort, Zero-shot-CoT concatenates the target problem statement with
"Let's think step by step" as an input prompt to LLMs. Despite the success of
Zero-shot-CoT, it still suffers from three pitfalls: calculation errors,
missing-step errors, and semantic misunderstanding errors. To address the
missing-step errors, we propose Plan-and-Solve (PS) Prompting. It consists of
two components: first, devising a plan to divide the entire task into smaller
subtasks, and then carrying out the subtasks according to the plan. To address
the calculation errors and improve the quality of generated reasoning steps, we
extend PS prompting with more detailed instructions and derive PS+ prompting.
We evaluate our proposed prompting strategy on ten datasets across three
reasoning problems. The experimental results over GPT-3 show that our proposed
zero-shot prompting consistently outperforms Zero-shot-CoT across all datasets
by a large margin, is comparable to or exceeds Zero-shot-Program-of-Thought
Prompting, and has comparable performance with 8-shot CoT prompting on the math
reasoning problem. The code can be found at
https://github.com/AGI-Edgerunners/Plan-and-Solve-Prompting.
|
https://huggingface.co/papers/2305.04091
|
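Plan-and-Solve prompting replaces "Let's think step by step" with a trigger that asks the model to first devise a plan and then carry it out; PS+ adds more detailed instructions aimed at calculation errors. The snippet below only assembles prompts in that style around a placeholder model call; the trigger sentences paraphrase the idea and are not guaranteed to match the paper's exact wording.

```python
PS_TRIGGER = ("Let's first understand the problem and devise a plan to solve it. "
              "Then, let's carry out the plan and solve the problem step by step.")

PS_PLUS_TRIGGER = ("Let's first understand the problem, extract relevant variables "
                   "and their corresponding numerals, and devise a plan. Then, let's "
                   "carry out the plan, calculate intermediate variables (paying "
                   "attention to correct numerical calculation), and solve the "
                   "problem step by step.")

def plan_and_solve_prompt(problem: str, plus: bool = False) -> str:
    trigger = PS_PLUS_TRIGGER if plus else PS_TRIGGER
    return f"Q: {problem}\nA: {trigger}"

def answer(problem: str, llm, plus: bool = False) -> str:
    """Two-pass zero-shot scheme: generate the reasoning, then extract the answer."""
    reasoning = llm(plan_and_solve_prompt(problem, plus))
    return llm(f"{plan_and_solve_prompt(problem, plus)}\n{reasoning}\n"
               "Therefore, the answer is")

# Toy usage with an echo function standing in for an LLM API.
echo = lambda prompt: "<model output for: " + prompt.splitlines()[0] + ">"
print(answer("A train travels 60 km in 1.5 hours. What is its average speed?", echo))
```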
2023-05-09
|
2305.05189
| 2
|
SUR-adapter: Enhancing Text-to-Image Pre-trained Diffusion Models with
Large Language Models
|
Diffusion models, which have emerged to become popular text-to-image
generation models, can produce high-quality and content-rich images guided by
textual prompts. However, there are limitations to semantic understanding and
commonsense reasoning in existing models when the input prompts are concise
narratives, resulting in low-quality image generation. To improve the capacity
for narrative prompts, we propose a simple-yet-effective parameter-efficient
fine-tuning approach called the Semantic Understanding and Reasoning adapter
(SUR-adapter) for pre-trained diffusion models. To reach this goal, we first
collect and annotate a new dataset SURD which consists of more than 57,000
semantically corrected multi-modal samples. Each sample contains a simple
narrative prompt, a complex keyword-based prompt, and a high-quality image.
Then, we align the semantic representation of narrative prompts to the complex
prompts and transfer knowledge of large language models (LLMs) to our
SUR-adapter via knowledge distillation so that it can acquire the powerful
semantic understanding and reasoning capabilities to build a high-quality
textual semantic representation for text-to-image generation. We conduct
experiments by integrating multiple LLMs and popular pre-trained diffusion
models to show the effectiveness of our approach in enabling diffusion models
to understand and reason concise natural language without image quality
degradation. Our approach can make text-to-image diffusion models easier to use
with better user experience, which demonstrates our approach has the potential
for further advancing the development of user-friendly text-to-image generation
models by bridging the semantic gap between simple narrative prompts and
complex keyword-based prompts.
|
https://huggingface.co/papers/2305.05189
|
2023-05-09
|
2305.03937
| 2
|
Residual Prompt Tuning: Improving Prompt Tuning with Residual
Reparameterization
|
Prompt tuning is one of the successful approaches for parameter-efficient
tuning of pre-trained language models. Despite being arguably the most
parameter-efficient (tuned soft prompts constitute <0.1% of total parameters),
it typically performs worse than other efficient tuning methods and is quite
sensitive to hyper-parameters. In this work, we introduce Residual Prompt
Tuning - a simple and efficient method that significantly improves the
performance and stability of prompt tuning. We propose to reparameterize soft
prompt embeddings using a shallow network with a residual connection. Our
experiments show that Residual Prompt Tuning significantly outperforms prompt
tuning on the SuperGLUE benchmark. Notably, our method reaches a +7-point
improvement over prompt tuning with T5-Base and allows reducing the prompt
length by 10x without hurting performance. In addition, we show that our
approach is robust to the choice of learning rate and prompt initialization,
and is effective in few-shot settings.
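As a rough illustration of the residual reparameterization, the sketch below passes trainable soft-prompt embeddings through a shallow MLP with a skip connection. The prompt length, bottleneck size, and LayerNorm placement are assumptions for illustration rather than the authors' exact configuration.

```python
# A minimal sketch of residual reparameterization for soft prompts.
import torch
import torch.nn as nn

class ResidualPromptEncoder(nn.Module):
    """Reparameterizes soft prompt embeddings with a shallow MLP plus a skip connection."""
    def __init__(self, prompt_len: int = 10, embed_dim: int = 768, bottleneck: int = 128):
        super().__init__()
        # The only trainable parameters: the raw prompt and the small MLP.
        self.prompt = nn.Parameter(torch.randn(prompt_len, embed_dim) * 0.02)
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim, bottleneck),
            nn.ReLU(),
            nn.Linear(bottleneck, embed_dim),
        )
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self) -> torch.Tensor:
        # Residual connection: reparameterized prompt = MLP(prompt) + prompt.
        return self.norm(self.mlp(self.prompt) + self.prompt)

prompts = ResidualPromptEncoder()()  # shape: (10, 768)
# These embeddings would be prepended to the input embeddings of a frozen LM.
print(prompts.shape)
```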
|
https://huggingface.co/papers/2305.03937
|
2023-05-09
|
2305.04790
| 1
|
MultiModal-GPT: A Vision and Language Model for Dialogue with Humans
|
We present a vision and language model named MultiModal-GPT to conduct
multi-round dialogue with humans. MultiModal-GPT can follow various
instructions from humans, such as generating a detailed caption, counting the
number of interested objects, and answering general questions from users.
MultiModal-GPT is parameter-efficiently fine-tuned from OpenFlamingo, with
Low-rank Adapter (LoRA) added both in the cross-attention part and the
self-attention part of the language model. We first construct instruction
templates with vision and language data for multi-modality instruction tuning
to make the model understand and follow human instructions. We find that the
quality of training data is vital for dialogue performance: even a small amount of
data with short answers can lead the model to respond tersely to any instruction.
To further enhance MultiModal-GPT's ability to chat with humans, we utilize
language-only instruction-following data to train MultiModal-GPT jointly. Joint
training on language-only and visual-language instructions with the same
instruction template effectively improves dialogue performance. Various demos show
MultiModal-GPT's ability to hold continuous dialogue with humans. Code, dataset, and demo are
at https://github.com/open-mmlab/Multimodal-GPT
|
https://huggingface.co/papers/2305.04790
|
2023-05-09
|
2305.04789
| 1
|
AvatarReX: Real-time Expressive Full-body Avatars
|
We present AvatarReX, a new method for learning NeRF-based full-body avatars
from video data. The learnt avatar not only provides expressive control of the
body, hands and the face together, but also supports real-time animation and
rendering. To this end, we propose a compositional avatar representation, where
the body, hands and the face are separately modeled in a way that the
structural prior from parametric mesh templates is properly utilized without
compromising representation flexibility. Furthermore, we disentangle the
geometry and appearance for each part. With these technical designs, we propose
a dedicated deferred rendering pipeline, which can be executed in real-time
framerate to synthesize high-quality free-view images. The disentanglement of
geometry and appearance also allows us to design a two-pass training strategy
that combines volume rendering and surface rendering for network training. In
this way, patch-level supervision can be applied to force the network to learn
sharp appearance details on the basis of geometry estimation. Overall, our
method enables automatic construction of expressive full-body avatars with
real-time rendering capability, and can generate photo-realistic images with
dynamic details for novel body motions and facial expressions.
|
https://huggingface.co/papers/2305.04789
|
2023-05-09
|
2305.04388
| 1
|
Language Models Don't Always Say What They Think: Unfaithful
Explanations in Chain-of-Thought Prompting
|
Large Language Models (LLMs) can achieve strong performance on many tasks by
producing step-by-step reasoning before giving a final output, often referred
to as chain-of-thought reasoning (CoT). It is tempting to interpret these CoT
explanations as the LLM's process for solving a task. However, we find that CoT
explanations can systematically misrepresent the true reason for a model's
prediction. We demonstrate that CoT explanations can be heavily influenced by
adding biasing features to model inputs -- e.g., by reordering the
multiple-choice options in a few-shot prompt to make the answer always "(A)" --
which models systematically fail to mention in their explanations. When we bias
models toward incorrect answers, they frequently generate CoT explanations
supporting those answers. This causes accuracy to drop by as much as 36% on a
suite of 13 tasks from BIG-Bench Hard, when testing with GPT-3.5 from OpenAI
and Claude 1.0 from Anthropic. On a social-bias task, model explanations
justify giving answers in line with stereotypes without mentioning the
influence of these social biases. Our findings indicate that CoT explanations
can be plausible yet misleading, which risks increasing our trust in LLMs
without guaranteeing their safety. CoT is promising for explainability, but our
results highlight the need for targeted efforts to evaluate and improve
explanation faithfulness.
|
https://huggingface.co/papers/2305.04388
|
2023-05-09
|
2305.04268
| 1
|
Multi-Space Neural Radiance Fields
|
Existing Neural Radiance Fields (NeRF) methods suffer from the existence of
reflective objects, often resulting in blurry or distorted rendering. Instead
of calculating a single radiance field, we propose a multi-space neural
radiance field (MS-NeRF) that represents the scene using a group of feature
fields in parallel sub-spaces, which leads to a better understanding of the
neural network toward the existence of reflective and refractive objects. Our
multi-space scheme works as an enhancement to existing NeRF methods, with only
small computational overheads needed for training and inferring the extra-space
outputs. We demonstrate the superiority and compatibility of our approach using
three representative NeRF-based models, i.e., NeRF, Mip-NeRF, and Mip-NeRF 360.
Comparisons are performed on a novelly constructed dataset consisting of 25
synthetic scenes and 7 real captured scenes with complex reflection and
refraction, all having 360-degree viewpoints. Extensive experiments show that
our approach significantly outperforms the existing single-space NeRF methods
for rendering high-quality scenes concerned with complex light paths through
mirror-like objects. Our code and dataset will be publicly available at
https://zx-yin.github.io/msnerf.
|
https://huggingface.co/papers/2305.04268
|
2023-05-09
|
2305.04241
| 1
|
Vcc: Scaling Transformers to 128K Tokens or More by Prioritizing
Important Tokens
|
Transformer models are foundational to natural language processing (NLP) and
computer vision. Despite various recent works devoted to reducing the quadratic
cost of such models (as a function of the sequence length n), dealing with
ultra long sequences efficiently (e.g., with more than 16K tokens) remains
challenging. Applications such as answering questions based on an entire book
or summarizing a scientific article are inefficient or infeasible. In this
paper, we propose to significantly reduce the dependency of a Transformer
model's complexity on n, by compressing the input into a representation whose
size r is independent of n at each layer. Specifically, by exploiting the
fact that in many tasks, only a small subset of special tokens (we call
VIP-tokens) are most relevant to the final prediction, we propose a VIP-token
centric compression (Vcc) scheme which selectively compresses the input
sequence based on their impact on approximating the representation of these
VIP-tokens. Compared with competitive baselines, the proposed algorithm not
only is efficient (achieving more than a 3x efficiency improvement over baselines
at 4K and 16K lengths), but also achieves competitive or
better performance on a large number of tasks. Further, we show that our
algorithm can be scaled to 128K tokens (or more) while consistently offering
accuracy improvement.
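A toy rendering of the VIP-token-centric idea: keep the designated VIP tokens exact and shrink the remaining tokens to a fixed budget r, so per-layer cost tracks r rather than n. The pooling-based compression below is an illustrative stand-in for the paper's scheme, not its actual algorithm.

```python
# A minimal sketch of VIP-token centric compression with average pooling as a
# stand-in compressor (an assumption for illustration).
import torch
import torch.nn.functional as F

def compress_sequence(x: torch.Tensor, vip_idx: torch.Tensor, r: int = 64) -> torch.Tensor:
    """x: (N, C) token representations; vip_idx: indices of VIP tokens; r: compressed length."""
    mask = torch.ones(x.size(0), dtype=torch.bool)
    mask[vip_idx] = False
    vip, rest = x[vip_idx], x[mask]                        # VIP tokens stay exact
    # Pool the non-VIP tokens down to r slots, so layer cost depends on r, not N.
    pooled = F.adaptive_avg_pool1d(rest.t().unsqueeze(0), r).squeeze(0).t()
    return torch.cat([vip, pooled], dim=0)                 # (len(vip_idx) + r, C)

x = torch.randn(16384, 256)
vip = torch.arange(32)                                     # e.g. question tokens
print(compress_sequence(x, vip).shape)                     # torch.Size([96, 256])
```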
|
https://huggingface.co/papers/2305.04241
|
2023-05-09
|
2305.03981
| 1
|
Pre-training Language Model as a Multi-perspective Course Learner
|
ELECTRA, the generator-discriminator pre-training framework, has achieved
impressive semantic construction capability among various downstream tasks.
Despite the convincing performance, ELECTRA still faces the challenges of
monotonous training and deficient interaction. Generator with only masked
language modeling (MLM) leads to biased learning and label imbalance for
discriminator, decreasing learning efficiency; no explicit feedback loop from
discriminator to generator results in the chasm between these two components,
underutilizing the course learning. In this study, a multi-perspective course
learning (MCL) method is proposed to provide multiple perspectives and viewing
angles for sample-efficient pre-training, and to fully leverage the relationship between
generator and discriminator. Concretely, three self-supervision courses are
designed to alleviate inherent flaws of MLM and balance the label in a
multi-perspective way. Besides, two self-correction courses are proposed to
bridge the chasm between the two encoders by creating a "correction notebook"
for secondary-supervision. Moreover, a course soups trial is conducted to solve
the "tug-of-war" dynamics problem of MCL, evolving a stronger pre-trained
model. Experimental results show that our method significantly improves
ELECTRA's average performance by 2.8% and 3.2% absolute points respectively on
GLUE and SQuAD 2.0 benchmarks, and overshadows recent advanced ELECTRA-style
models under the same settings. The pre-trained MCL model is available at
https://huggingface.co/McmanusChen/MCL-base.
|
https://huggingface.co/papers/2305.03981
|
2023-05-10
|
2305.05065
| 7
|
Recommender Systems with Generative Retrieval
|
Modern recommender systems leverage large-scale retrieval models consisting
of two stages: training a dual-encoder model to embed queries and candidates in
the same space, followed by an Approximate Nearest Neighbor (ANN) search to
select top candidates given a query's embedding. In this paper, we propose a
new single-stage paradigm: a generative retrieval model which autoregressively
decodes the identifiers for the target candidates in one phase. To do this,
instead of assigning randomly generated atomic IDs to each item, we generate
Semantic IDs: a semantically meaningful tuple of codewords for each item that
serves as its unique identifier. We use a hierarchical method called RQ-VAE to
generate these codewords. Once we have the Semantic IDs for all the items, a
Transformer based sequence-to-sequence model is trained to predict the Semantic
ID of the next item. Since this model predicts the tuple of codewords
identifying the next item directly in an autoregressive manner, it can be
considered a generative retrieval model. We show that our recommender system
trained in this new paradigm improves the results achieved by current SOTA
models on the Amazon dataset. Moreover, we demonstrate that the
sequence-to-sequence model coupled with hierarchical Semantic IDs offers better
generalization and hence improves retrieval of cold-start items for
recommendations.
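The sketch below mimics how residual quantization can turn item embeddings into Semantic IDs, using k-means at each level as a toy stand-in for the RQ-VAE; the codebook sizes and number of levels are illustrative assumptions.

```python
# A minimal sketch of Semantic IDs via residual quantization (k-means is a toy
# stand-in for the RQ-VAE described in the abstract).
import numpy as np
from sklearn.cluster import KMeans

def build_semantic_ids(item_embeddings: np.ndarray, levels: int = 3, codebook_size: int = 16):
    residual = item_embeddings.copy()
    codebooks, codes = [], []
    for _ in range(levels):
        km = KMeans(n_clusters=codebook_size, n_init=10, random_state=0).fit(residual)
        codebooks.append(km.cluster_centers_)
        codes.append(km.labels_.tolist())
        # Quantize and keep the residual for the next, finer level.
        residual = residual - km.cluster_centers_[km.labels_]
    # Each item gets a tuple of codewords, e.g. (3, 11, 7), usable as decoder targets.
    return list(zip(*codes)), codebooks

items = np.random.randn(1000, 64).astype(np.float32)
semantic_ids, _ = build_semantic_ids(items)
print(semantic_ids[0])
```

A sequence-to-sequence model would then be trained to emit these codeword tuples autoregressively for the next item in a user's history.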
|
https://huggingface.co/papers/2305.05065
|
2023-05-10
|
2304.09355
| 5
|
To Compress or Not to Compress- Self-Supervised Learning and Information
Theory: A Review
|
Deep neural networks have demonstrated remarkable performance in supervised
learning tasks but require large amounts of labeled data. Self-supervised
learning offers an alternative paradigm, enabling the model to learn from data
without explicit labels. Information theory has been instrumental in
understanding and optimizing deep neural networks. Specifically, the
information bottleneck principle has been applied to optimize the trade-off
between compression and relevant information preservation in supervised
settings. However, the optimal information objective in self-supervised
learning remains unclear. In this paper, we review various approaches to
self-supervised learning from an information-theoretic standpoint and present a
unified framework that formalizes the self-supervised information-theoretic
learning problem. We integrate existing research into a coherent framework,
examine recent self-supervised methods, and identify research opportunities and
challenges. Moreover, we discuss empirical measurement of information-theoretic
quantities and their estimators. This paper offers a comprehensive review of
the intersection between information theory, self-supervised learning, and deep
neural networks.
|
https://huggingface.co/papers/2304.09355
|
2023-05-10
|
2305.05862
| 4
|
Are ChatGPT and GPT-4 General-Purpose Solvers for Financial Text
Analytics? An Examination on Several Typical Tasks
|
The most recent large language models such as ChatGPT and GPT-4 have garnered
significant attention, as they are capable of generating high-quality responses
to human input. Despite the extensive testing of ChatGPT and GPT-4 on generic
text corpora, showcasing their impressive capabilities, a study focusing on
financial corpora has not been conducted. In this study, we aim to bridge this
gap by examining the potential of ChatGPT and GPT-4 as a solver for typical
financial text analytic problems in the zero-shot or few-shot setting.
Specifically, we assess their capabilities on four representative tasks over
five distinct financial textual datasets. The preliminary study shows that
ChatGPT and GPT-4 struggle on tasks such as financial named entity recognition
(NER) and sentiment analysis, where domain-specific knowledge is required,
while they excel in numerical reasoning tasks. We report both the strengths and
limitations of the current versions of ChatGPT and GPT-4, comparing them to the
state-of-the-art finetuned models as well as pretrained domain-specific
generative models. Our experiments provide qualitative studies, through which
we hope to help understand the capability of the existing models and facilitate
further improvements.
|
https://huggingface.co/papers/2305.05862
|
2023-05-10
|
2305.05591
| 3
|
AudioSlots: A slot-centric generative model for audio separation
|
In a range of recent works, object-centric architectures have been shown to
be suitable for unsupervised scene decomposition in the vision domain. Inspired
by these methods we present AudioSlots, a slot-centric generative model for
blind source separation in the audio domain. AudioSlots is built using
permutation-equivariant encoder and decoder networks. The encoder network based
on the Transformer architecture learns to map a mixed audio spectrogram to an
unordered set of independent source embeddings. The spatial broadcast decoder
network learns to generate the source spectrograms from the source embeddings.
We train the model in an end-to-end manner using a permutation invariant loss
function. Our results on Libri2Mix speech separation constitute a proof of
concept that this approach shows promise. We discuss the results and
limitations of our approach in detail, and further outline potential ways to
overcome the limitations and directions for future work.
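A permutation-invariant loss of the kind described above can be sketched with Hungarian matching between predicted slot spectrograms and ground-truth sources; the slot count and MSE cost used below are illustrative assumptions.

```python
# A minimal sketch of a permutation-invariant reconstruction loss for source slots.
import torch
from scipy.optimize import linear_sum_assignment

def permutation_invariant_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """pred, target: (S, F, T) -- S source slots, each a spectrogram of shape (F, T)."""
    # Pairwise MSE between every predicted slot and every ground-truth source.
    cost = ((pred.unsqueeze(1) - target.unsqueeze(0)) ** 2).mean(dim=(2, 3))
    rows, cols = linear_sum_assignment(cost.detach().cpu().numpy())
    return cost[torch.as_tensor(rows), torch.as_tensor(cols)].mean()

loss = permutation_invariant_loss(torch.randn(2, 257, 100), torch.randn(2, 257, 100))
print(loss)
```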
|
https://huggingface.co/papers/2305.05591
|
2023-05-10
|
2305.06077
| 2
|
Relightify: Relightable 3D Faces from a Single Image via Diffusion
Models
|
Following the remarkable success of diffusion models on image generation,
recent works have also demonstrated their impressive ability to address a
number of inverse problems in an unsupervised way, by properly constraining the
sampling process based on a conditioning input. Motivated by this, in this
paper, we present the first approach to use diffusion models as a prior for
highly accurate 3D facial BRDF reconstruction from a single image. We start by
leveraging a high-quality UV dataset of facial reflectance (diffuse and
specular albedo and normals), which we render under varying illumination
settings to simulate natural RGB textures and, then, train an unconditional
diffusion model on concatenated pairs of rendered textures and reflectance
components. At test time, we fit a 3D morphable model to the given image and
unwrap the face in a partial UV texture. By sampling from the diffusion model,
while retaining the observed texture part intact, the model inpaints not only
the self-occluded areas but also the unknown reflectance components, in a
single sequence of denoising steps. In contrast to existing methods, we
directly acquire the observed texture from the input image, thus resulting in
more faithful and consistent reflectance estimation. Through a series of
qualitative and quantitative comparisons, we demonstrate superior performance
in both texture completion as well as reflectance reconstruction tasks.
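A schematic of the texture-preserving sampling constraint: at each reverse step the observed UV region is re-noised to the current level and kept, while unknown regions are taken from the model's prediction. The step and noising functions below are placeholders, and the sketch is not the paper's actual sampler.

```python
# A minimal sketch of inpainting-style constrained sampling, with placeholder
# functions standing in for the diffusion model and noise schedule.
import torch

def constrained_step(x_t, step_fn, known, mask, add_noise_to_level):
    """x_t: current sample; step_fn: one reverse diffusion step; known: observed texture;
    mask: 1 where the texture is observed; add_noise_to_level: forward-noises `known`."""
    x_prev_pred = step_fn(x_t)                   # model's denoising proposal
    x_prev_known = add_noise_to_level(known)     # observed pixels, matched to the current step
    return mask * x_prev_known + (1 - mask) * x_prev_pred

x = torch.randn(1, 3, 64, 64)
known = torch.rand(1, 3, 64, 64)
mask = (torch.rand(1, 1, 64, 64) > 0.5).float()
out = constrained_step(x, lambda z: z * 0.99, known, mask,
                       lambda k: k + 0.01 * torch.randn_like(k))
print(out.shape)
```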
|
https://huggingface.co/papers/2305.06077
|
2023-05-10
|
2305.05845
| 2
|
Sketching the Future (STF): Applying Conditional Control Techniques to
Text-to-Video Models
|
The proliferation of video content demands efficient and flexible neural
network based approaches for generating new video content. In this paper, we
propose a novel approach that combines zero-shot text-to-video generation with
ControlNet to improve the output of these models. Our method takes multiple
sketched frames as input and generates video output that matches the flow of
these frames, building upon the Text-to-Video Zero architecture and
incorporating ControlNet to enable additional input conditions. By first
interpolating frames between the input sketches and then running Text-to-Video
Zero with the resulting interpolated-frame video as the control
technique, we leverage the benefits of both zero-shot text-to-video generation
and the robust control provided by ControlNet. Experiments demonstrate that our
method excels at producing high-quality and remarkably consistent video content
that more accurately aligns with the user's intended motion for the subject
within the video. We provide a comprehensive resource package, including a demo
video, project website, open-source GitHub repository, and a Colab playground
to foster further research and application of our proposed method.
|
https://huggingface.co/papers/2305.05845
|
2023-05-10
|
2305.05658
| 2
|
TidyBot: Personalized Robot Assistance with Large Language Models
|
For a robot to personalize physical assistance effectively, it must learn
user preferences that can be generally reapplied to future scenarios. In this
work, we investigate personalization of household cleanup with robots that can
tidy up rooms by picking up objects and putting them away. A key challenge is
determining the proper place to put each object, as people's preferences can
vary greatly depending on personal taste or cultural background. For instance,
one person may prefer storing shirts in the drawer, while another may prefer
them on the shelf. We aim to build systems that can learn such preferences from
just a handful of examples via prior interactions with a particular person. We
show that robots can combine language-based planning and perception with the
few-shot summarization capabilities of large language models (LLMs) to infer
generalized user preferences that are broadly applicable to future
interactions. This approach enables fast adaptation and achieves 91.2% accuracy
on unseen objects in our benchmark dataset. We also demonstrate our approach on
a real-world mobile manipulator called TidyBot, which successfully puts away
85.0% of objects in real-world test scenarios.
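One way to picture the preference-summarization step is as two prompts: one asking an LLM to compress a handful of placement examples into general rules, and one applying those rules to a new object. The prompt text below is an illustrative paraphrase, not the system's actual prompt.

```python
# A minimal sketch of summarizing placement preferences with an LLM prompt
# (example objects and wording are illustrative).
EXAMPLES = [
    ("yellow shirt", "drawer"), ("dark purple shirt", "drawer"),
    ("banana peel", "compost bin"), ("soda can", "recycling bin"),
]

def summarization_prompt(examples) -> str:
    lines = "\n".join(f"- {obj} -> {place}" for obj, place in examples)
    return (
        "Here is where one person put things while tidying up:\n"
        f"{lines}\n"
        "Summarize this person's preferences as general placement rules."
    )

def placement_prompt(rules: str, new_object: str) -> str:
    return f"Rules:\n{rules}\nWhere should '{new_object}' go? Answer with one receptacle."

print(summarization_prompt(EXAMPLES))
# The summary returned by the LLM would then be inserted into placement_prompt(...)
# for each newly detected object.
```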
|
https://huggingface.co/papers/2305.05658
|
2023-05-10
|
2305.05364
| 2
|
Large Language Model Programs
|
In recent years, large pre-trained language models (LLMs) have demonstrated
the ability to follow instructions and perform novel tasks from a few examples.
The possibility to parameterise an LLM through such in-context examples widens
their capability at a much lower cost than finetuning. We extend this line of
reasoning and present a method which further expands the capabilities of an LLM
by embedding it within an algorithm or program. To demonstrate the benefits of
this approach, we present an illustrative example of evidence-supported
question-answering. We obtain a 6.4% improvement over the chain-of-thought
baseline through a more algorithmic approach without any finetuning.
Furthermore, we highlight recent work from this perspective and discuss the
advantages and disadvantages in comparison to the standard approaches.
|
https://huggingface.co/papers/2305.05364
|
2023-05-10
|
2305.04966
| 2
|
NerfAcc: Efficient Sampling Accelerates NeRFs
|
Optimizing and rendering Neural Radiance Fields is computationally expensive
due to the vast number of samples required by volume rendering. Recent works
have included alternative sampling approaches to help accelerate their methods,
however, they are often not the focus of the work. In this paper, we
investigate and compare multiple sampling approaches and demonstrate that
improved sampling is generally applicable across NeRF variants under a unified
concept of transmittance estimator. To facilitate future experiments, we
develop NerfAcc, a Python toolbox that provides flexible APIs for incorporating
advanced sampling methods into NeRF related methods. We demonstrate its
flexibility by showing that it can reduce the training time of several recent
NeRF methods by 1.5x to 20x with minimal modifications to the existing
codebase. Additionally, highly customized NeRFs, such as Instant-NGP, can be
implemented in native PyTorch using NerfAcc.
|
https://huggingface.co/papers/2305.04966
|
2023-05-10
|
2305.05383
| 2
|
Code Execution with Pre-trained Language Models
|
Code execution is a fundamental aspect of programming language semantics that
reflects the exact behavior of the code. However, most pre-trained models for
code intelligence ignore the execution trace and only rely on source code and
syntactic structures. In this paper, we investigate how well pre-trained models
can understand and perform code execution. We develop a mutation-based data
augmentation technique to create a large-scale and realistic Python dataset and
task for code execution, which challenges existing models such as Codex. We
then present CodeExecutor, a Transformer model that leverages code execution
pre-training and curriculum learning to enhance its semantic comprehension. We
evaluate CodeExecutor on code execution and show its promising performance and
limitations. We also demonstrate its potential benefits for code intelligence
tasks such as zero-shot code-to-code search and text-to-code generation. Our
analysis provides insights into the learning and generalization abilities of
pre-trained models for code execution.
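Mutation-based augmentation of the sort described above can be pictured as parsing a snippet, applying a small edit, and re-executing it to obtain a fresh ground-truth outcome. The single operator-swap mutation below is only an illustrative example of such a mutation, not the paper's full pipeline.

```python
# A minimal sketch of mutation-based augmentation for code-execution data:
# flip a comparison operator and record the mutant's execution result.
import ast

class FlipComparison(ast.NodeTransformer):
    def visit_Compare(self, node):
        self.generic_visit(node)
        swapped = {ast.Lt: ast.Gt, ast.Gt: ast.Lt, ast.LtE: ast.GtE, ast.GtE: ast.LtE}
        node.ops = [swapped.get(type(op), type(op))() for op in node.ops]
        return node

source = "x = 3\nif x < 5:\n    y = 'small'\nelse:\n    y = 'big'\n"
mutant = ast.unparse(FlipComparison().visit(ast.parse(source)))
env = {}
exec(mutant, env)              # execute the mutant to obtain its ground-truth behavior
print(mutant, env["y"])        # the (code, execution outcome) pair becomes a training sample
```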
|
https://huggingface.co/papers/2305.05383
|
2023-05-10
|
2305.05432
| 1
|
WikiWeb2M: A Page-Level Multimodal Wikipedia Dataset
|
Webpages have been a rich resource for language and vision-language tasks.
Yet only pieces of webpages are kept: image-caption pairs, long text articles,
or raw HTML, never all in one place. As a result, webpage tasks have received
little attention and structured image-text data has been underused. To study multimodal
webpage understanding, we introduce the Wikipedia Webpage 2M (WikiWeb2M) suite;
the first to retain the full set of images, text, and structure data available
in a page. WikiWeb2M can be used for tasks like page description generation,
section summarization, and contextual image captioning.
|
https://huggingface.co/papers/2305.05432
|
2023-05-11
|
2305.06161
| 31
|
StarCoder: may the source be with you!
|
The BigCode community, an open-scientific collaboration working on the
responsible development of Large Language Models for Code (Code LLMs),
introduces StarCoder and StarCoderBase: 15.5B parameter models with 8K context
length, infilling capabilities and fast large-batch inference enabled by
multi-query attention. StarCoderBase is trained on 1 trillion tokens sourced
from The Stack, a large collection of permissively licensed GitHub repositories
with inspection tools and an opt-out process. We fine-tuned StarCoderBase on
35B Python tokens, resulting in the creation of StarCoder. We perform the most
comprehensive evaluation of Code LLMs to date and show that StarCoderBase
outperforms every open Code LLM that supports multiple programming languages
and matches or outperforms the OpenAI code-cushman-001 model. Furthermore,
StarCoder outperforms every model that is fine-tuned on Python, can be prompted
to achieve 40% pass@1 on HumanEval, and still retains its performance on other
programming languages. We take several important steps towards a safe
open-access model release, including an improved PII redaction pipeline and a
novel attribution tracing tool, and make the StarCoder models publicly
available under a more commercially viable version of the Open Responsible AI
Model license.
|
https://huggingface.co/papers/2305.06161
|
2023-05-11
|
2305.06355
| 3
|
VideoChat: Chat-Centric Video Understanding
|
In this study, we initiate an exploration into video understanding by
introducing VideoChat, an end-to-end chat-centric video understanding system.
It integrates video foundation models and large language models via a learnable
neural interface, excelling in spatiotemporal reasoning, event localization,
and causal relationship inference. To instructively tune this system, we
propose a video-centric instruction dataset, composed of thousands of videos
matched with detailed descriptions and conversations. This dataset emphasizes
spatiotemporal reasoning and causal relationships, providing a valuable asset
for training chat-centric video understanding systems. Preliminary qualitative
experiments reveal our system's potential across a broad spectrum of video
applications and set the standard for future research. Access our code and data
at https://github.com/OpenGVLab/Ask-Anything
|
https://huggingface.co/papers/2305.06355
|
2023-05-11
|
2305.06131
| 2
|
Generative AI meets 3D: A Survey on Text-to-3D in AIGC Era
|
Generative AI (AIGC, a.k.a. AI generated content) has made remarkable
progress in the past few years, among which text-guided content generation is
the most practical one since it enables the interaction between human
instruction and AIGC. Due to developments in text-to-image as well as 3D
modeling technologies (like NeRF), text-to-3D has become a newly emerging yet
highly active research field. Our work conducts the first yet comprehensive
survey on text-to-3D to help readers interested in this direction quickly catch
up with its fast development. First, we introduce 3D data representations,
including both Euclidean data and non-Euclidean data. On top of that, we
introduce various foundation technologies as well as summarize how recent works
combine those foundation technologies to realize satisfactory text-to-3D.
Moreover, we summarize how text-to-3D technology is used in various
applications, including avatar generation, texture generation, shape
transformation, and scene generation.
|
https://huggingface.co/papers/2305.06131
|
2023-05-11
|
2305.06356
| 1
|
HumanRF: High-Fidelity Neural Radiance Fields for Humans in Motion
|
Representing human performance at high-fidelity is an essential building
block in diverse applications, such as film production, computer games or
videoconferencing. To close the gap to production-level quality, we introduce
HumanRF, a 4D dynamic neural scene representation that captures full-body
appearance in motion from multi-view video input, and enables playback from
novel, unseen viewpoints. Our novel representation acts as a dynamic video
encoding that captures fine details at high compression rates by factorizing
space-time into a temporal matrix-vector decomposition. This allows us to
obtain temporally coherent reconstructions of human actors for long sequences,
while representing high-resolution details even in the context of challenging
motion. While most research focuses on synthesizing at resolutions of 4MP or
lower, we address the challenge of operating at 12MP. To this end, we introduce
ActorsHQ, a novel multi-view dataset that provides 12MP footage from 160
cameras for 16 sequences with high-fidelity, per-frame mesh reconstructions. We
demonstrate challenges that emerge from using such high-resolution data and
show that our newly introduced HumanRF effectively leverages this data, making
a significant step towards production-level quality novel view synthesis.
|
https://huggingface.co/papers/2305.06356
|
2023-05-11
|
2305.06351
| 1
|
Reconstructing Animatable Categories from Videos
|
Building animatable 3D models is challenging due to the need for 3D scans,
laborious registration, and manual rigging, which are difficult to scale to
arbitrary categories. Recently, differentiable rendering provides a pathway to
obtain high-quality 3D models from monocular videos, but these are limited to
rigid categories or single instances. We present RAC that builds category 3D
models from monocular videos while disentangling variations over instances and
motion over time. Three key ideas are introduced to solve this problem: (1)
specializing a skeleton to instances via optimization, (2) a method for latent
space regularization that encourages shared structure across a category while
maintaining instance details, and (3) using 3D background models to disentangle
objects from the background. We show that 3D models of humans, cats, and dogs
can be learned from 50-100 internet videos.
|
https://huggingface.co/papers/2305.06351
|
2023-05-11
|
2305.06324
| 1
|
Alternating Gradient Descent and Mixture-of-Experts for Integrated
Multimodal Perception
|
We present Integrated Multimodal Perception (IMP), a simple and scalable
multimodal multi-task training and modeling approach. IMP integrates multimodal
inputs including image, video, text, and audio into a single Transformer
encoder with minimal modality-specific components. IMP makes use of a novel
design that combines Alternating Gradient Descent (AGD) and Mixture-of-Experts
(MoE) for efficient model and task scaling. We conduct extensive empirical
studies about IMP and reveal the following key insights: 1) performing gradient
descent updates by alternating on diverse heterogeneous modalities, loss
functions, and tasks, while also varying input resolutions, efficiently
improves multimodal understanding. 2) model sparsification with MoE on a single
modality-agnostic encoder substantially improves the performance, outperforming
dense models that use modality-specific encoders or additional fusion layers
and greatly mitigating the conflicts between modalities. IMP achieves
competitive performance on a wide range of downstream tasks including image
classification, video classification, image-text, and video-text retrieval.
Most notably, we train a sparse IMP-MoE-L focusing on video tasks that achieves
new state-of-the-art in zero-shot video classification. Our model achieves
77.0% on Kinetics-400, 76.8% on Kinetics-600, and 76.8% on Kinetics-700
zero-shot classification accuracy, improving the previous state-of-the-art by
+5%, +6.7%, and +5.8%, respectively, while using only 15% of their total
training computational cost.
|
https://huggingface.co/papers/2305.06324
|
2023-05-11
|
2305.05973
| 1
|
Privacy-Preserving Recommender Systems with Synthetic Query Generation
using Differentially Private Large Language Models
|
We propose a novel approach for developing privacy-preserving large-scale
recommender systems using differentially private (DP) large language models
(LLMs) which overcomes certain challenges and limitations in DP training these
complex systems. Our method is particularly well suited for the emerging area
of LLM-based recommender systems, but can be readily employed for any
recommender systems that process representations of natural language inputs.
Our approach involves using DP training methods to fine-tune a publicly
pre-trained LLM on a query generation task. The resulting model can generate
private synthetic queries representative of the original queries which can be
freely shared for any downstream non-private recommendation training procedures
without incurring any additional privacy cost. We evaluate our method on its
ability to securely train effective deep retrieval models, and we observe
significant improvements in their retrieval quality without compromising
query-level privacy guarantees compared to methods where the retrieval models
are directly DP trained.
|
https://huggingface.co/papers/2305.05973
|
2023-05-11
|
2305.05706
| 1
|
DexArt: Benchmarking Generalizable Dexterous Manipulation with
Articulated Objects
|
To enable general-purpose robots, we will require the robot to operate daily
articulated objects as humans do. Current robot manipulation has heavily relied
on using a parallel gripper, which restricts the robot to a limited set of
objects. On the other hand, operating with a multi-finger robot hand will allow
better approximation to human behavior and enable the robot to operate on
diverse articulated objects. To this end, we propose a new benchmark called
DexArt, which involves Dexterous manipulation with Articulated objects in a
physical simulator. In our benchmark, we define multiple complex manipulation
tasks, and the robot hand will need to manipulate diverse articulated objects
within each task. Our main focus is to evaluate the generalizability of the
learned policy on unseen articulated objects. This is very challenging given
the high degrees of freedom of both hands and objects. We use Reinforcement
Learning with 3D representation learning to achieve generalization. Through
extensive studies, we provide new insights into how 3D representation learning
affects decision making in RL with 3D point cloud inputs. More details can be
found at https://www.chenbao.tech/dexart/.
|
https://huggingface.co/papers/2305.05706
|
2023-05-11
|
2305.06218
| 1
|
Multi-Task End-to-End Training Improves Conversational Recommendation
|
In this paper, we analyze the performance of a multitask end-to-end
transformer model on the task of conversational recommendations, which aim to
provide recommendations based on a user's explicit preferences expressed in
dialogue. While previous works in this area adopt complex multi-component
approaches where the dialogue management and entity recommendation tasks are
handled by separate components, we show that a unified transformer model, based
on the T5 text-to-text transformer model, can perform competitively in both
recommending relevant items and generating conversation dialogue. We fine-tune
our model on the ReDIAL conversational movie recommendation dataset, and create
additional training tasks derived from MovieLens (such as the prediction of
movie attributes and related movies based on an input movie), in a multitask
learning setting. Using a series of probe studies, we demonstrate that the
learned knowledge in the additional tasks is transferred to the conversational
setting, where each task leads to a 9%-52% increase in its related probe score.
|
https://huggingface.co/papers/2305.06218
|
2023-05-12
|
2305.06908
| 6
|
CoMoSpeech: One-Step Speech and Singing Voice Synthesis via Consistency
Model
|
Denoising diffusion probabilistic models (DDPMs) have shown promising
performance for speech synthesis. However, a large number of iterative steps
are required to achieve high sample quality, which restricts the inference
speed. Maintaining sample quality while increasing sampling speed has become a
challenging task. In this paper, we propose a "Co"nsistency "Mo"del-based
"Speech" synthesis method, CoMoSpeech, which achieves speech synthesis through a
single diffusion sampling step while maintaining high audio quality. The
consistency constraint is applied to distill a consistency model from a
well-designed diffusion-based teacher model, which ultimately yields superior
performance in the distilled CoMoSpeech. Our experiments show that by
generating audio recordings with a single sampling step, CoMoSpeech achieves
an inference speed more than 150 times faster than real-time on a single NVIDIA
A100 GPU, which is comparable to FastSpeech2, making diffusion-sampling based
speech synthesis truly practical. Meanwhile, objective and subjective
evaluations on text-to-speech and singing voice synthesis show that the
proposed teacher models yield the best audio quality, and the one-step sampling
based CoMoSpeech achieves the best inference speed with better or comparable
audio quality to other conventional multi-step diffusion model baselines. Audio
samples are available at https://comospeech.github.io/.
|
https://huggingface.co/papers/2305.06908
|
2023-05-12
|
2305.07011
| 5
|
Region-Aware Pretraining for Open-Vocabulary Object Detection with
Vision Transformers
|
We present Region-aware Open-vocabulary Vision Transformers (RO-ViT) - a
contrastive image-text pretraining recipe to bridge the gap between image-level
pretraining and open-vocabulary object detection. At the pretraining phase, we
propose to randomly crop and resize regions of positional embeddings instead of
using the whole image positional embeddings. This better matches the use of
positional embeddings at region-level in the detection finetuning phase. In
addition, we replace the common softmax cross entropy loss in contrastive
learning with focal loss to better learn the informative yet difficult
examples. Finally, we leverage recent advances in novel object proposals to
improve open-vocabulary detection finetuning. We evaluate our full model on the
LVIS and COCO open-vocabulary detection benchmarks and zero-shot transfer.
RO-ViT achieves a state-of-the-art 32.1 AP_r on LVIS, surpassing the best
existing approach by +5.8 points in addition to competitive zero-shot transfer
detection. Surprisingly, RO-ViT improves the image-level representation as well
and achieves the state of the art on 9 out of 12 metrics on COCO and Flickr
image-text retrieval benchmarks, outperforming competitive approaches with
larger models.
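The cropped positional embedding recipe can be sketched as randomly cropping a region of the full-image positional-embedding grid and resizing it back to the token grid during pretraining; the grid size, crop scales, and bilinear interpolation below are illustrative assumptions.

```python
# A minimal sketch of randomly cropping and resizing positional embeddings
# during image-text pretraining (sizes and scales are illustrative).
import torch
import torch.nn.functional as F

def cropped_positional_embeddings(pos_embed: torch.Tensor, out_size: int = 14,
                                  min_scale: float = 0.1, max_scale: float = 1.0):
    """pos_embed: (1, H, W, C) full-image positional embedding grid."""
    _, H, W, C = pos_embed.shape
    scale = torch.empty(1).uniform_(min_scale, max_scale).item()
    ch, cw = max(1, int(H * scale)), max(1, int(W * scale))
    top = torch.randint(0, H - ch + 1, (1,)).item()
    left = torch.randint(0, W - cw + 1, (1,)).item()
    crop = pos_embed[:, top:top + ch, left:left + cw, :]           # random region
    crop = crop.permute(0, 3, 1, 2)                                 # (1, C, ch, cw)
    resized = F.interpolate(crop, size=(out_size, out_size),
                            mode="bilinear", align_corners=False)   # back to the token grid
    return resized.permute(0, 2, 3, 1)                              # (1, out, out, C)

pe = torch.randn(1, 14, 14, 768)
print(cropped_positional_embeddings(pe).shape)  # torch.Size([1, 14, 14, 768])
```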
|
https://huggingface.co/papers/2305.07011
|
2023-05-12
|
2305.06500
| 5
|
InstructBLIP: Towards General-purpose Vision-Language Models with
Instruction Tuning
|
General-purpose language models that can solve various language-domain tasks
have emerged driven by the pre-training and instruction-tuning pipeline.
However, building general-purpose vision-language models is challenging due to
the increased task discrepancy introduced by the additional visual input.
Although vision-language pre-training has been widely studied, vision-language
instruction tuning remains relatively less explored. In this paper, we conduct
a systematic and comprehensive study on vision-language instruction tuning
based on the pre-trained BLIP-2 models. We gather a wide variety of 26 publicly
available datasets, transform them into instruction tuning format and
categorize them into two clusters for held-in instruction tuning and held-out
zero-shot evaluation. Additionally, we introduce instruction-aware visual
feature extraction, a crucial method that enables the model to extract
informative features tailored to the given instruction. The resulting
InstructBLIP models achieve state-of-the-art zero-shot performance across all
13 held-out datasets, substantially outperforming BLIP-2 and the larger
Flamingo. Our models also lead to state-of-the-art performance when finetuned
on individual downstream tasks (e.g., 90.7% accuracy on ScienceQA IMG).
Furthermore, we qualitatively demonstrate the advantages of InstructBLIP over
concurrent multimodal models. All InstructBLIP models have been open-sourced at
https://github.com/salesforce/LAVIS/tree/main/projects/instructblip.
|
https://huggingface.co/papers/2305.06500
|
2023-05-12
|
2305.07027
| 4
|
EfficientViT: Memory Efficient Vision Transformer with Cascaded Group
Attention
|
Vision transformers have shown great success due to their high model
capabilities. However, their remarkable performance is accompanied by heavy
computation costs, which makes them unsuitable for real-time applications. In
this paper, we propose a family of high-speed vision transformers named
EfficientViT. We find that the speed of existing transformer models is commonly
bounded by memory inefficient operations, especially the tensor reshaping and
element-wise functions in MHSA. Therefore, we design a new building block with
a sandwich layout, i.e., using a single memory-bound MHSA between efficient FFN
layers, which improves memory efficiency while enhancing channel communication.
Moreover, we discover that the attention maps share high similarities across
heads, leading to computational redundancy. To address this, we present a
cascaded group attention module feeding attention heads with different splits
of the full feature, which not only saves computation cost but also improves
attention diversity. Comprehensive experiments demonstrate EfficientViT
outperforms existing efficient models, striking a good trade-off between speed
and accuracy. For instance, our EfficientViT-M5 surpasses MobileNetV3-Large by
1.9% in accuracy, while getting 40.4% and 45.2% higher throughput on Nvidia
V100 GPU and Intel Xeon CPU, respectively. Compared to the recent efficient
model MobileViT-XXS, EfficientViT-M2 achieves 1.8% superior accuracy, while
running 5.8x/3.7x faster on the GPU/CPU, and 7.4x faster when converted to ONNX
format. Code and models are available at
https://github.com/microsoft/Cream/tree/main/EfficientViT.
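A minimal rendering of cascaded group attention: each head attends over a different split of the channel dimension, and the output of one head is added to the input of the next. The plain softmax attention and the dimensions below are illustrative simplifications of the actual block.

```python
# A minimal sketch of cascaded group attention (dimensions and the plain
# attention formulation are simplifying assumptions).
import torch
import torch.nn as nn

class CascadedGroupAttention(nn.Module):
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.heads, self.d = heads, dim // heads
        self.qkv = nn.ModuleList(nn.Linear(self.d, 3 * self.d) for _ in range(heads))
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:    # x: (B, N, dim)
        splits = x.chunk(self.heads, dim=-1)                # one split of the feature per head
        outs, carry = [], 0
        for split, qkv in zip(splits, self.qkv):
            h = split + carry                               # cascade the previous head's output
            q, k, v = qkv(h).chunk(3, dim=-1)
            attn = torch.softmax(q @ k.transpose(-2, -1) / self.d ** 0.5, dim=-1)
            carry = attn @ v
            outs.append(carry)
        return self.proj(torch.cat(outs, dim=-1))

y = CascadedGroupAttention()(torch.randn(2, 49, 128))
print(y.shape)  # torch.Size([2, 49, 128])
```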
|
https://huggingface.co/papers/2305.07027
|
2023-05-12
|
2305.07015
| 4
|
Exploiting Diffusion Prior for Real-World Image Super-Resolution
|
We present a novel approach to leverage prior knowledge encapsulated in
pre-trained text-to-image diffusion models for blind super-resolution (SR).
Specifically, by employing our time-aware encoder, we can achieve promising
restoration results without altering the pre-trained synthesis model, thereby
preserving the generative prior and minimizing training cost. To remedy the
loss of fidelity caused by the inherent stochasticity of diffusion models, we
introduce a controllable feature wrapping module that allows users to balance
quality and fidelity by simply adjusting a scalar value during the inference
process. Moreover, we develop a progressive aggregation sampling strategy to
overcome the fixed-size constraints of pre-trained diffusion models, enabling
adaptation to resolutions of any size. A comprehensive evaluation of our method
using both synthetic and real-world benchmarks demonstrates its superiority
over current state-of-the-art approaches.
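The controllable feature wrapping idea can be sketched as blending fidelity-oriented encoder features into the diffusion decoder features with a user-chosen scalar w at inference time; the convolutional fusion block and insertion point below are assumptions for illustration.

```python
# A minimal sketch of scalar-controlled feature blending for quality/fidelity trade-off.
import torch
import torch.nn as nn

class FeatureWrap(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, dec_feat, enc_feat, w: float = 0.5):
        # Smaller w keeps the generative decoder features; larger w leans on the
        # low-resolution encoder features; intermediate values trade the two off.
        return dec_feat + w * self.fuse(torch.cat([dec_feat, enc_feat], dim=1))

out = FeatureWrap()(torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32), w=0.7)
print(out.shape)  # torch.Size([1, 64, 32, 32])
```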
|
https://huggingface.co/papers/2305.07015
|
2023-05-12
|
2305.07017
| 3
|
An Inverse Scaling Law for CLIP Training
|
CLIP, one of the pioneering foundation models that connect images and text,
has enabled many recent breakthroughs in computer vision. However, its
associated training cost is prohibitively high, imposing a significant barrier
to its widespread exploration. In this paper, we present a surprising finding
that there exists an inverse scaling law for CLIP training, whereby the larger
the image/text encoders used, the shorter the sequence length of image/text
tokens that can be applied in training. Moreover, we showcase that the strategy
for reducing image/text token length plays a crucial role in determining the
quality of this scaling law.
As a result of this finding, we are able to successfully train CLIP even with
limited computational resources. For example, using 8 A100 GPUs, our CLIP
models achieve zero-shot top-1 ImageNet-1k accuracies of 63.2% in ~2 days,
67.8% in ~3 days, and 69.3% in ~4 days. Our method also works well when scaling
up -- with G/14, we register a new record of 83.0% ImageNet-1k zero-shot
accuracy, and meanwhile accelerate the training by ~33x compared to its
OpenCLIP counterpart. By reducing the computation barrier associated with CLIP,
we hope to inspire more research in this field, particularly from academics.
Our code is available at https://github.com/UCSC-VLAA/CLIPA.
|
https://huggingface.co/papers/2305.07017
|
2023-05-12
|
2305.06575
| 2
|
Chain-of-Dictionary Prompting Elicits Translation in Large Language
Models
|
Large language models (LLMs) have shown surprisingly good performance in
multilingual neural machine translation (MNMT) even when trained without
parallel data. Yet, despite the fact that the amount of training data is
gigantic, they still struggle with translating rare words, particularly for
low-resource languages. Even worse, it is usually unrealistic to retrieve
relevant demonstrations for in-context learning with low-resource languages on
LLMs, which restricts the practical use of LLMs for translation -- how should
we mitigate this problem? To this end, we present a novel method, CoD, which
augments LLMs with prior knowledge with the chains of multilingual dictionaries
for a subset of input words to elicit translation abilities for LLMs. Extensive
experiments indicate that augmenting ChatGPT with CoD elicits large gains by up
to 13x ChrF++ points for MNMT (3.08 to 42.63 for English to Serbian written in
Cyrillic script) on FLORES-200 full devtest set. We further demonstrate the
importance of chaining the multilingual dictionaries, as well as the
superiority of CoD to few-shot demonstration for low-resource languages.
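The chained-dictionary prompt can be pictured as prepending, for a subset of source words, a chain of translations across several languages before the translation request; the dictionary contents and prompt wording below are illustrative assumptions.

```python
# A minimal sketch of assembling chained multilingual dictionary hints into a
# translation prompt (wording and entries are illustrative).
def chain_of_dictionary_prompt(sentence: str, src: str, tgt: str,
                               chains: dict[str, dict[str, str]]) -> str:
    hints = "\n".join(
        f'"{word}" means ' + ", ".join(f'"{tr}" in {lang}' for lang, tr in translations.items())
        for word, translations in chains.items()
    )
    return (
        f"{hints}\n"
        f"Translate the following {src} sentence into {tgt}:\n{sentence}"
    )

chains = {"otter": {"German": "Otter", "Serbian": "видра"}}
print(chain_of_dictionary_prompt("The otter swims in the river.", "English", "Serbian", chains))
```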
|
https://huggingface.co/papers/2305.06575
|
2023-05-12
|
2305.07021
| 1
|
Simple Token-Level Confidence Improves Caption Correctness
|
The ability to judge whether a caption correctly describes an image is a
critical part of vision-language understanding. However, state-of-the-art
models often misinterpret the correctness of fine-grained details, leading to
errors in outputs such as hallucinating objects in generated captions or poor
compositional reasoning. In this work, we explore Token-Level Confidence, or
TLC, as a simple yet surprisingly effective method to assess caption
correctness. Specifically, we fine-tune a vision-language model on image
captioning, input an image and proposed caption to the model, and aggregate
either algebraic or learned token confidences over words or sequences to
estimate image-caption consistency. Compared to sequence-level scores from
pretrained models, TLC with algebraic confidence measures achieves a relative
improvement in accuracy by 10% on verb understanding in SVO-Probes and
outperforms prior state-of-the-art in image and group scores for compositional
reasoning in Winoground by a relative 37% and 9%, respectively. When training
data are available, a learned confidence estimator provides further improved
performance, reducing object hallucination rates in MS COCO Captions by a
relative 30% over the original model and setting a new state-of-the-art.
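Algebraic token-level confidence can be sketched as teacher-forcing the proposed caption through the captioner and aggregating the per-token probabilities it assigns; the mean and min aggregations shown below are illustrative choices.

```python
# A minimal sketch of algebraic token-level confidence for caption scoring.
import torch

def token_level_confidence(token_logits: torch.Tensor, caption_ids: torch.Tensor,
                           reduce: str = "mean") -> float:
    """token_logits: (T, V) logits the captioner assigns at each caption position;
    caption_ids: (T,) ids of the proposed caption (teacher-forced)."""
    probs = torch.softmax(token_logits, dim=-1)
    token_conf = probs.gather(1, caption_ids.unsqueeze(1)).squeeze(1)  # (T,)
    if reduce == "mean":
        return token_conf.mean().item()
    if reduce == "min":      # a single unsupported word can veto the caption
        return token_conf.min().item()
    raise ValueError(reduce)

# Between two candidate captions, the one with higher confidence is judged
# more consistent with the image.
logits = torch.randn(7, 32000)
ids = torch.randint(0, 32000, (7,))
print(token_level_confidence(logits, ids, reduce="min"))
```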
|
https://huggingface.co/papers/2305.07021
|
2023-05-12
|
2305.07004
| 1
|
Not All Languages Are Created Equal in LLMs: Improving Multilingual
Capability by Cross-Lingual-Thought Prompting
|
Large language models (LLMs) demonstrate impressive multilingual capability,
but their performance varies substantially across different languages. In this
work, we introduce a simple yet effective method, called cross-lingual-thought
prompting (XLT), to systematically improve the multilingual capability of LLMs.
Specifically, XLT is a generic template prompt that stimulates cross-lingual
and logical reasoning skills to enhance task performance across languages. We
conduct comprehensive evaluations on 7 typical benchmarks related to reasoning,
understanding, and generation tasks, covering both high-resource and
low-resource languages. Experimental results show that XLT not only remarkably
enhances the performance of various multilingual tasks but also significantly
reduces the gap between the average performance and the best performance of
each task in different languages. Notably, XLT brings over 10 points of average
improvement in arithmetic reasoning and open-domain question-answering tasks.
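The template-prompt idea can be sketched as wrapping any request in instructions to first translate it into English, reason step by step, and then answer; the wording below is an illustrative paraphrase, not the official XLT template.

```python
# A minimal sketch of a cross-lingual-thought style prompt wrapper
# (template wording is an illustrative paraphrase).
def xlt_style_prompt(task_request: str, source_language: str) -> str:
    return (
        f"I want you to act as an expert in {source_language} and English.\n"
        f"Request: {task_request}\n"
        "1. Translate the request into English.\n"
        "2. Think through the problem step by step in English.\n"
        "3. Give the final answer, then restate it in the original language.\n"
    )

print(xlt_style_prompt("9.11和9.8哪个更大?", "Chinese"))
```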
|
https://huggingface.co/papers/2305.07004
|
2023-05-12
|
2305.06594
| 1
|
V2Meow: Meowing to the Visual Beat via Music Generation
|
Generating high quality music that complements the visual content of a video
is a challenging task. Most existing visual conditioned music generation
systems generate symbolic music data, such as MIDI files, instead of raw audio
waveform. Given the limited availability of symbolic music data, such methods
can only generate music for a few instruments or for specific types of visual
input. In this paper, we propose a novel approach called V2Meow that can
generate high-quality music audio that aligns well with the visual semantics of
a diverse range of video input types. Specifically, the proposed music
generation system is a multi-stage autoregressive model trained on the order of
100K music audio clips paired with video frames, which are mined
from in-the-wild music videos, and no parallel symbolic music data is involved.
V2Meow is able to synthesize high-fidelity music audio waveform solely
conditioned on pre-trained visual features extracted from an arbitrary silent
video clip, and it also allows high-level control over the music style of
generation examples via supporting text prompts in addition to the video frames
conditioning. Through both qualitative and quantitative evaluations, we
demonstrate that our model outperforms several existing music generation
systems in terms of both visual-audio correspondence and audio quality.
|
https://huggingface.co/papers/2305.06594
|
2023-05-12
|
2305.06555
| 1
|
Domain Incremental Lifelong Learning in an Open World
|
Lifelong learning (LL) is an important ability for NLP models to learn new
tasks continuously. Architecture-based approaches are reported to be effective
implementations for LL models. However, it is non-trivial to extend previous
approaches to domain incremental LL scenarios since they either require access
to task identities in the testing phase or cannot handle samples from unseen
tasks. In this paper, we propose Diana: a dynamic architecture-based lifelong
learning model that tries to learn a sequence of tasks with a prompt-enhanced
language model. Four types of hierarchically
organized prompts are used in Diana to capture knowledge from different
granularities. Specifically, we dedicate task-level prompts to capture
task-specific knowledge to retain high LL performances and maintain
instance-level prompts to learn knowledge shared across input samples to
improve the model's generalization performance. Moreover, we dedicate separate
prompts to explicitly model unseen tasks and introduce a set of prompt key
vectors to facilitate knowledge sharing between tasks. Extensive experiments
demonstrate that Diana outperforms state-of-the-art LL models, especially in
handling unseen tasks. We release the code and data at
https://github.com/AlibabaResearch/DAMO-ConvAI/tree/main/diana.
|
https://huggingface.co/papers/2305.06555
|
2023-05-12
|
2305.06474
| 1
|
Do LLMs Understand User Preferences? Evaluating LLMs On User Rating
Prediction
|
Large Language Models (LLMs) have demonstrated exceptional capabilities in
generalizing to new tasks in a zero-shot or few-shot manner. However, the
extent to which LLMs can comprehend user preferences based on their previous
behavior remains an emerging and still unclear research question.
Traditionally, Collaborative Filtering (CF) has been the most effective method
for these tasks, predominantly relying on the extensive volume of rating data.
In contrast, LLMs typically demand considerably less data while maintaining an
exhaustive world knowledge about each item, such as movies or products. In this
paper, we conduct a thorough examination of both CF and LLMs within the classic
task of user rating prediction, which involves predicting a user's rating for a
candidate item based on their past ratings. We investigate various LLMs in
different sizes, ranging from 250M to 540B parameters and evaluate their
performance in zero-shot, few-shot, and fine-tuning scenarios. We conduct
comprehensive analysis to compare between LLMs and strong CF methods, and find
that zero-shot LLMs lag behind traditional recommender models that have the
access to user interaction data, indicating the importance of user interaction
data. However, through fine-tuning, LLMs achieve comparable or even better
performance with only a small fraction of the training data, demonstrating
their potential through data efficiency.
|
https://huggingface.co/papers/2305.06474
|
2023-05-12
|
2305.06456
| 1
|
Perpetual Humanoid Control for Real-time Simulated Avatars
|
We present a physics-based humanoid controller that achieves high-fidelity
motion imitation and fault-tolerant behavior in the presence of noisy input
(e.g. pose estimates from video or generated from language) and unexpected
falls. Our controller scales up to learning ten thousand motion clips without
using any external stabilizing forces and learns to naturally recover from
fail-state. Given reference motion, our controller can perpetually control
simulated avatars without requiring resets. At its core, we propose the
progressive multiplicative control policy (PMCP), which dynamically allocates
new network capacity to learn harder and harder motion sequences. PMCP allows
efficient scaling for learning from large-scale motion databases and adding new
tasks, such as fail-state recovery, without catastrophic forgetting. We
demonstrate the effectiveness of our controller by using it to imitate noisy
poses from video-based pose estimators and language-based motion generators in
a live and real-time multi-person avatar use case.
|
https://huggingface.co/papers/2305.06456
|
2023-05-12
|
2305.06424
| 1
|
Bot or Human? Detecting ChatGPT Imposters with A Single Question
|
Large language models like ChatGPT have recently demonstrated impressive
capabilities in natural language understanding and generation, enabling various
applications including translation, essay writing, and chit-chatting. However,
there is a concern that they can be misused for malicious purposes, such as
fraud or denial-of-service attacks. Therefore, it is crucial to develop methods
for detecting whether the party involved in a conversation is a bot or a human.
In this paper, we propose a framework named FLAIR, Finding Large language model
Authenticity via a single Inquiry and Response, to detect conversational bots
in an online manner. Specifically, we target a single question scenario that
can effectively differentiate human users from bots. The questions are divided
into two categories: those that are easy for humans but difficult for bots
(e.g., counting, substitution, positioning, noise filtering, and ASCII art),
and those that are easy for bots but difficult for humans (e.g., memorization
and computation). Our approach shows different strengths of these questions in
their effectiveness, providing a new way for online service providers to
protect themselves against nefarious activities and ensure that they are
serving real users. We open-sourced our dataset on
https://github.com/hongwang600/FLAIR and welcome contributions from the
community to enrich such detection datasets.
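A single-question check of the "easy for humans, hard for bots" kind can be sketched as generating a random counting question and comparing the reply to the known answer; the specific question type and scoring rule below are illustrative assumptions.

```python
# A minimal sketch of a single-question bot check based on letter counting.
import random
import string

def make_counting_question() -> tuple[str, str]:
    target = random.choice(string.ascii_lowercase)
    text = "".join(random.choices(string.ascii_lowercase, k=40))
    question = f'How many times does the letter "{target}" appear in: {text}'
    return question, str(text.count(target))

def is_likely_human(answer: str, expected: str) -> bool:
    return answer.strip() == expected

q, expected = make_counting_question()
print(q)
# The service shows q to the user and compares their reply against `expected`.
```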
|
https://huggingface.co/papers/2305.06424
|
2023-05-12
|
2305.06404
| 1
|
LACoS-BLOOM: Low-rank Adaptation with Contrastive objective on 8 bits
Siamese-BLOOM
|
Text embeddings are useful features for several NLP applications, such as
sentence similarity, text clustering, and semantic search. In this paper, we
present a Low-rank Adaptation with a Contrastive objective on top of 8-bit
Siamese-BLOOM, a multilingual large language model optimized to produce
semantically meaningful word embeddings. The innovation is threefold. First, we
cast BLOOM weights to 8-bit values. Second, we fine-tune BLOOM with a scalable
adapter (LoRA) and 8-bit Adam optimizer for sentence similarity classification.
Third, we apply a Siamese architecture on BLOOM model with a contrastive
objective to ease the multi-lingual labeled data scarcity. The experiment
results show the quality of learned embeddings from LACoS-BLOOM is proportional
to the number of model parameters and the amount of unlabeled training data.
With the parameter-efficient fine-tuning design, we are able to run the
7.1-billion-parameter BLOOM end-to-end on a single GPU machine with 32GB memory.
Compared to the previous solution, Sentence-BERT, we achieve significant improvements
on both English and multi-lingual STS tasks.
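The three ingredients (8-bit weights, a LoRA adapter, and a contrastive Siamese objective) might be wired together roughly as below. The model name, LoRA target modules, pooling, and hyperparameters are assumptions, and the exact library arguments may differ across transformers/peft versions.

```python
# A minimal sketch under assumed settings; not the paper's exact training setup.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer
from peft import LoraConfig, get_peft_model

name = "bigscience/bloom-560m"                      # small stand-in for BLOOM-7.1B
tok = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name, load_in_8bit=True, device_map="auto")
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16,
                                         target_modules=["query_key_value"]))

def embed(sentences):
    batch = tok(sentences, padding=True, return_tensors="pt").to(model.device)
    hidden = model(**batch).last_hidden_state                  # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)
    return (hidden * mask).sum(1) / mask.sum(1)                # mean pooling

def contrastive_loss(a, b, temperature=0.05):
    sims = F.cosine_similarity(a.unsqueeze(1), b.unsqueeze(0), dim=-1) / temperature
    labels = torch.arange(a.size(0), device=a.device)          # matched pairs on the diagonal
    return F.cross_entropy(sims, labels)

loss = contrastive_loss(embed(["a cat sits", "the weather is nice"]),
                        embed(["a cat is sitting", "it is a lovely day"]))
```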
|
https://huggingface.co/papers/2305.06404
|
2023-05-14
|
2305.07185
| 9
|
MEGABYTE: Predicting Million-byte Sequences with Multiscale Transformers
|
Autoregressive transformers are spectacular models for short sequences but
scale poorly to long sequences such as high-resolution images, podcasts, code,
or books. We propose Megabyte, a multi-scale decoder architecture that enables
end-to-end differentiable modeling of sequences of over one million bytes.
Megabyte segments sequences into patches and uses a local submodel within
patches and a global model between patches. This enables sub-quadratic
self-attention, much larger feedforward layers for the same compute, and
improved parallelism during decoding -- unlocking better performance at reduced
cost for both training and generation. Extensive experiments show that Megabyte
allows byte-level models to perform competitively with subword models on long
context language modeling, achieve state-of-the-art density estimation on
ImageNet, and model audio from raw files. Together, these results establish the
viability of tokenization-free autoregressive sequence modeling at scale.
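The patch decomposition can be pictured as grouping bytes into patches, running a global model across patch embeddings, and a local model within each patch. The layer types and sizes below are illustrative, and causal masking is omitted for brevity, so this is not the paper's exact architecture.

```python
# A minimal sketch of a patch-based multiscale byte model (illustrative sizes,
# no causal masking, encoder layers as stand-ins for the decoder stacks).
import torch
import torch.nn as nn

class TinyMultiscaleByteLM(nn.Module):
    def __init__(self, patch_size=8, d_local=128, d_global=256, vocab=256):
        super().__init__()
        self.patch_size = patch_size
        self.byte_embed = nn.Embedding(vocab, d_local)
        self.to_global = nn.Linear(patch_size * d_local, d_global)
        self.global_model = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_global, nhead=4, batch_first=True), num_layers=2)
        self.to_local = nn.Linear(d_global, d_local)
        self.local_model = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_local, nhead=4, batch_first=True), num_layers=2)
        self.head = nn.Linear(d_local, vocab)

    def forward(self, bytes_in: torch.Tensor) -> torch.Tensor:
        B, T = bytes_in.shape
        P = self.patch_size
        x = self.byte_embed(bytes_in)                       # (B, T, d_local)
        patches = x.view(B, T // P, P, -1)                  # group bytes into patches
        g = self.to_global(patches.flatten(2))              # (B, T/P, d_global)
        g = self.global_model(g)                            # context across patches
        local_in = patches + self.to_local(g).unsqueeze(2)  # inject patch-level context
        h = self.local_model(local_in.view(B * (T // P), P, -1))
        return self.head(h).view(B, T, -1)                  # per-byte logits

logits = TinyMultiscaleByteLM()(torch.randint(0, 256, (2, 64)))
print(logits.shape)  # torch.Size([2, 64, 256])
```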
|
https://huggingface.co/papers/2305.07185
|
2023-05-14
|
2305.07243
| 5
|
Better speech synthesis through scaling
|
In recent years, the field of image generation has been revolutionized by the
application of autoregressive transformers and DDPMs. These approaches model
the process of image generation as a step-wise probabilistic process and
leverage large amounts of compute and data to learn the image distribution.
This methodology of improving performance need not be confined to images. This
paper describes a way to apply advances in the image generative domain to
speech synthesis. The result is TorToise -- an expressive, multi-voice
text-to-speech system.
All model code and trained weights have been open-sourced at
https://github.com/neonbjb/tortoise-tts.
|
https://huggingface.co/papers/2305.07243
|
2023-05-14
|
2305.07490
| 1
|
ArtGPT-4: Artistic Vision-Language Understanding with Adapter-enhanced
MiniGPT-4
|
In recent years, large language models (LLMs) have made significant progress
in natural language processing (NLP), with models like ChatGPT and GPT-4
achieving impressive capabilities in various linguistic tasks. However,
training models on such a large scale is challenging, and finding datasets that
match the model's scale is often difficult. Fine-tuning and training models
with fewer parameters using novel methods have emerged as promising approaches
to overcome these challenges. One such model is MiniGPT-4, which achieves
comparable vision-language understanding to GPT-4 by leveraging novel
pre-training models and innovative training strategies. However, the model
still faces some challenges in image understanding, particularly in artistic
pictures. A novel multimodal model called ArtGPT-4 has been proposed to address
these limitations. ArtGPT-4 was trained on image-text pairs in just 2 hours on a
Tesla A100, using only about 200 GB of data. The model can depict
images with an artistic flair and generate visual code, including aesthetically
pleasing HTML/CSS web pages. Furthermore, the article proposes novel benchmarks
for evaluating the performance of vision-language models. On these benchmarks,
ArtGPT-4 scored more than 1 point higher than the current
state-of-the-art model and was only 0.25 points lower than artists on
a 6-point scale. Our code and pre-trained model are available at
https://huggingface.co/Tyrannosaurus/ArtGPT-4.
|
https://huggingface.co/papers/2305.07490
|
2023-05-15
|
2305.08379
| 3
|
TESS: Text-to-Text Self-Conditioned Simplex Diffusion
|
Diffusion models have emerged as a powerful paradigm for generation,
obtaining strong performance in various domains with continuous-valued inputs.
Despite the promises of fully non-autoregressive text generation, applying
diffusion models to natural language remains challenging due to its discrete
nature. In this work, we propose Text-to-text Self-conditioned Simplex
Diffusion (TESS), a text diffusion model that is fully non-autoregressive,
employs a new form of self-conditioning, and applies the diffusion process on
the logit simplex space rather than the typical learned embedding space.
Through extensive experiments on natural language understanding and generation
tasks including summarization, text simplification, paraphrase generation, and
question generation, we demonstrate that TESS outperforms state-of-the-art
non-autoregressive models and is competitive with pretrained autoregressive
sequence-to-sequence models.
|
https://huggingface.co/papers/2305.08379
|
2023-05-15
|
2305.07447
| 3
|
Universal Source Separation with Weakly Labelled Data
|
Universal source separation (USS) is a fundamental research task for
computational auditory scene analysis, which aims to separate mono recordings
into individual source tracks. Three main challenges remain for the audio
source separation task. First, previous audio source
separation systems mainly focus on separating one or a limited number of
specific sources. There is a lack of research on building a unified system that
can separate arbitrary sources via a single model. Second, most previous
systems require clean source data to train a separator, while clean source data
are scarce. Third, there is a lack of USS systems that can automatically detect
and separate active sound classes at a hierarchical level. To use large-scale
weakly labeled/unlabeled audio data for audio source separation, we propose a
universal audio source separation framework containing: 1) an audio tagging
model trained on weakly labeled data as a query net; and 2) a conditional
source separation model that takes query net outputs as conditions to separate
arbitrary sound sources. We investigate various query nets, source separation
models, and training strategies and propose a hierarchical USS strategy to
automatically detect and separate sound classes from the AudioSet ontology. By
solely leveraging the weakly labelled AudioSet, our USS system is successful in
separating a wide variety of sound classes, including sound event separation,
music source separation, and speech enhancement. The USS system achieves an
average signal-to-distortion ratio improvement (SDRi) of 5.57 dB over 527 sound
classes of AudioSet; 10.57 dB on the DCASE 2018 Task 2 dataset; 8.12 dB on the
MUSDB18 dataset; an SDRi of 7.28 dB on the Slakh2100 dataset; and an SSNR of
9.00 dB on the voicebank-demand dataset. We release the source code at
https://github.com/bytedance/uss
|
https://huggingface.co/papers/2305.07447
|
2023-05-15
|
2305.08850
| 1
|
Make-A-Protagonist: Generic Video Editing with An Ensemble of Experts
|
The text-driven image and video diffusion models have achieved unprecedented
success in generating realistic and diverse content. Recently, the editing and
variation of existing images and videos in diffusion-based generative models
have garnered significant attention. However, previous works are limited to
editing content with text or providing coarse personalization using a single
visual clue, rendering them unsuitable for indescribable content that requires
fine-grained and detailed control. In this regard, we propose a generic video
editing framework called Make-A-Protagonist, which utilizes textual and visual
clues to edit videos with the goal of empowering individuals to become the
protagonists. Specifically, we leverage multiple experts to parse source video,
target visual and textual clues, and propose a visual-textual-based video
generation model that employs mask-guided denoising sampling to generate the
desired output. Extensive results demonstrate the versatile and remarkable
editing capabilities of Make-A-Protagonist.
|
https://huggingface.co/papers/2305.08850
|
2023-05-15
|
2305.07615
| 1
|
What are the Desired Characteristics of Calibration Sets? Identifying
Correlates on Long Form Scientific Summarization
|
Summarization models often generate text that is poorly calibrated to quality
metrics because they are trained to maximize the likelihood of a single
reference (MLE). To address this, recent work has added a calibration step,
which exposes a model to its own ranked outputs to improve relevance or, in a
separate line of work, contrasts positive and negative sets to improve
faithfulness. While effective, much of this work has focused on how to generate
and optimize these sets. Less is known about why one setup is more effective
than another. In this work, we uncover the underlying characteristics of
effective sets. For each training instance, we form a large, diverse pool of
candidates and systematically vary the subsets used for calibration
fine-tuning. Each selection strategy targets distinct aspects of the sets, such
as lexical diversity or the size of the gap between positive and negatives. On
three diverse scientific long-form summarization datasets (spanning biomedical,
clinical, and chemical domains), we find, among others, that faithfulness
calibration is optimal when the negative sets are extractive and more likely to
be generated, whereas for relevance calibration, the metric margin between
candidates should be maximized and surprise--the disagreement between model and
metric defined candidate rankings--minimized. Code to create, select, and
optimize calibration sets is available at
https://github.com/griff4692/calibrating-summaries
|
https://huggingface.co/papers/2305.07615
|
2023-05-15
|
2305.07558
| 1
|
Measuring Progress in Fine-grained Vision-and-Language Understanding
|
While pretraining on large-scale image-text data from the Web has facilitated
rapid progress on many vision-and-language (V&L) tasks, recent work has
demonstrated that pretrained models lack "fine-grained" understanding, such as
the ability to recognise relationships, verbs, and numbers in images. This has
resulted in an increased interest in the community to either develop new
benchmarks or models for such capabilities. To better understand and quantify
progress in this direction, we investigate four competitive V&L models on four
fine-grained benchmarks. Through our analysis, we find that X-VLM (Zeng et al.,
2022) consistently outperforms other baselines, and that modelling innovations
can impact performance more than scaling Web data, which even degrades
performance sometimes. Through a deeper investigation of X-VLM, we highlight
the importance of both novel losses and rich data sources for learning
fine-grained skills. Finally, we inspect training dynamics, and discover that
for some tasks, performance peaks early in training or significantly
fluctuates, never converging.
|
https://huggingface.co/papers/2305.07558
|
2023-05-15
|
2305.07514
| 1
|
BlendFields: Few-Shot Example-Driven Facial Modeling
|
Generating faithful visualizations of human faces requires capturing both
coarse and fine-level details of the face geometry and appearance. Existing
methods are either data-driven, requiring an extensive corpus of data not
publicly accessible to the research community, or fail to capture fine details
because they rely on geometric face models that cannot represent fine-grained
details in texture with a mesh discretization and linear deformation designed
to model only a coarse face geometry. We introduce a method that bridges this
gap by drawing inspiration from traditional computer graphics techniques.
Unseen expressions are modeled by blending appearance from a sparse set of
extreme poses. This blending is performed by measuring local volumetric changes
in those expressions and locally reproducing their appearance whenever a
similar expression is performed at test time. We show that our method
generalizes to unseen expressions, adding fine-grained effects on top of smooth
volumetric deformations of a face, and demonstrate how it generalizes beyond
faces.
|
https://huggingface.co/papers/2305.07514
|
2023-05-15
|
2305.07378
| 1
|
Surfacing Biases in Large Language Models using Contrastive Input
Decoding
|
Ensuring that large language models (LMs) are fair, robust and useful
requires an understanding of how different modifications to their inputs impact
the model's behaviour. In the context of open-text generation tasks, however,
such an evaluation is not trivial. For example, when presenting a model with
an input text and a perturbed, "contrastive" version of it, meaningful
differences in the next-token predictions may not be revealed with standard
decoding strategies. With this motivation in mind, we propose Contrastive Input
Decoding (CID): a decoding algorithm to generate text given two inputs, where
the generated text is likely given one input but unlikely given the other. In
this way, the contrastive generations can highlight potentially subtle
differences in how the LM output differs for the two inputs in a simple and
interpretable manner. We use CID to highlight context-specific biases that are
hard to detect with standard decoding strategies and quantify the effect of
different input perturbations.
|
https://huggingface.co/papers/2305.07378
|
2023-05-15
|
2305.07214
| 1
|
MMG-Ego4D: Multi-Modal Generalization in Egocentric Action Recognition
|
In this paper, we study a novel problem in egocentric action recognition,
which we term as "Multimodal Generalization" (MMG). MMG aims to study how
systems can generalize when data from certain modalities is limited or even
completely missing. We thoroughly investigate MMG in the context of standard
supervised action recognition and the more challenging few-shot setting for
learning new action categories. MMG consists of two novel scenarios, designed
to support security and efficiency considerations in real-world applications:
(1) missing modality generalization where some modalities that were present
during the train time are missing during the inference time, and (2)
cross-modal zero-shot generalization, where the modalities present during the
inference time and the training time are disjoint. To enable this
investigation, we construct a new dataset MMG-Ego4D containing data points with
video, audio, and inertial motion sensor (IMU) modalities. Our dataset is
derived from the Ego4D dataset, but processed and thoroughly re-annotated by human
experts to facilitate research in the MMG problem. We evaluate a diverse array
of models on MMG-Ego4D and propose new methods with improved generalization
ability. In particular, we introduce a new fusion module with modality dropout
training, contrastive-based alignment training, and a novel cross-modal
prototypical loss for better few-shot performance. We hope this study will
serve as a benchmark and guide future research in multimodal generalization
problems. The benchmark and code will be available at
https://github.com/facebookresearch/MMG_Ego4D.
|
https://huggingface.co/papers/2305.07214
|
2023-05-15
|
2305.07440
| 1
|
Optimizing Memory Mapping Using Deep Reinforcement Learning
|
Resource scheduling and allocation is a critical component of many high
impact systems ranging from congestion control to cloud computing. Finding more
optimal solutions to these problems often has significant impact on resource
and time savings, reducing device wear-and-tear, and even potentially lowering
carbon emissions. In this paper, we focus on a specific instance of a
scheduling problem, namely the memory mapping problem that occurs during
compilation of machine learning programs: That is, mapping tensors to different
memory layers to optimize execution time.
We introduce an approach for solving the memory mapping problem using
Reinforcement Learning. RL is a solution paradigm well-suited for sequential
decision making problems that are amenable to planning, and combinatorial
search spaces with high-dimensional data inputs. We formulate the problem as a
single-player game, which we call the mallocGame, such that high-reward
trajectories of the game correspond to efficient memory mappings on the target
hardware. We also introduce a Reinforcement Learning agent, mallocMuZero, and
show that it is capable of playing this game to discover new and improved
memory mapping solutions that lead to faster execution times on real ML
workloads on ML accelerators. We compare the performance of mallocMuZero to the
default solver used by the Accelerated Linear Algebra (XLA) compiler on a
benchmark of realistic ML workloads. In addition, we show that mallocMuZero is
capable of improving the execution time of the recently published AlphaTensor
matrix multiplication model.
|
https://huggingface.co/papers/2305.07440
|
2023-05-15
|
2305.07153
| 0
|
Towards best practices in AGI safety and governance: A survey of expert
opinion
|
A number of leading AI companies, including OpenAI, Google DeepMind, and
Anthropic, have the stated goal of building artificial general intelligence
(AGI) - AI systems that achieve or exceed human performance across a wide range
of cognitive tasks. In pursuing this goal, they may develop and deploy AI
systems that pose particularly significant risks. While they have already taken
some measures to mitigate these risks, best practices have not yet emerged. To
support the identification of best practices, we sent a survey to 92 leading
experts from AGI labs, academia, and civil society and received 51 responses.
Participants were asked how much they agreed with 50 statements about what AGI
labs should do. Our main finding is that participants, on average, agreed with
all of them. Many statements received extremely high levels of agreement. For
example, 98% of respondents somewhat or strongly agreed that AGI labs should
conduct pre-deployment risk assessments, dangerous capabilities evaluations,
third-party model audits, safety restrictions on model usage, and red teaming.
Ultimately, our list of statements may serve as a helpful foundation for
efforts to develop best practices, standards, and regulations for AGI labs.
|
https://huggingface.co/papers/2305.07153
|
2023-05-16
|
2305.07759
| 36
|
TinyStories: How Small Can Language Models Be and Still Speak Coherent
English?
|
Language models (LMs) are powerful tools for natural language processing, but
they often struggle to produce coherent and fluent text when they are small.
Models with around 125M parameters such as GPT-Neo (small) or GPT-2 (small) can
rarely generate coherent and consistent English text beyond a few words even
after extensive training. This raises the question of whether the emergence of
the ability to produce coherent English text only occurs at larger scales (with
hundreds of millions of parameters or more) and complex architectures (with
many layers of global attention).
In this work, we introduce TinyStories, a synthetic dataset of short stories
that only contain words that typical 3 to 4-year-olds usually understand,
generated by GPT-3.5 and GPT-4. We show that TinyStories can be used to train
and evaluate LMs that are much smaller than the state-of-the-art models (below
10 million total parameters), or have much simpler architectures (with only one
transformer block), yet still produce fluent and consistent stories with
several paragraphs that are diverse and have almost perfect grammar, and
demonstrate reasoning capabilities.
We also introduce a new paradigm for the evaluation of language models: We
suggest a framework which uses GPT-4 to grade the content generated by these
models as if those were stories written by students and graded by a (human)
teacher. This new paradigm overcomes the flaws of standard benchmarks, which
often require the model's output to be very structured, and moreover provides
a multidimensional score for the model, with separate scores for different
capabilities such as grammar, creativity, and consistency.
We hope that TinyStories can facilitate the development, analysis and
research of LMs, especially for low-resource or specialized domains, and shed
light on the emergence of language capabilities in LMs.
|
https://huggingface.co/papers/2305.07759
|
2023-05-16
|
2305.09636
| 13
|
SoundStorm: Efficient Parallel Audio Generation
|
We present SoundStorm, a model for efficient, non-autoregressive audio
generation. SoundStorm receives as input the semantic tokens of AudioLM, and
relies on bidirectional attention and confidence-based parallel decoding to
generate the tokens of a neural audio codec. Compared to the autoregressive
generation approach of AudioLM, our model produces audio of the same quality
and with higher consistency in voice and acoustic conditions, while being two
orders of magnitude faster. SoundStorm generates 30 seconds of audio in 0.5
seconds on a TPU-v4. We demonstrate the ability of our model to scale audio
generation to longer sequences by synthesizing high-quality, natural dialogue
segments, given a transcript annotated with speaker turns and a short prompt
with the speakers' voices.
|
https://huggingface.co/papers/2305.09636
|
2023-05-16
|
2305.08596
| 9
|
DarkBERT: A Language Model for the Dark Side of the Internet
|
Recent research has suggested that there are clear differences in the
language used in the Dark Web compared to that of the Surface Web. As studies
on the Dark Web commonly require textual analysis of the domain, language
models specific to the Dark Web may provide valuable insights to researchers.
In this work, we introduce DarkBERT, a language model pretrained on Dark Web
data. We describe the steps taken to filter and compile the text data used to
train DarkBERT to combat the extreme lexical and structural diversity of the
Dark Web that may be detrimental to building a proper representation of the
domain. We evaluate DarkBERT and its vanilla counterpart along with other
widely used language models to validate the benefits that a Dark Web domain
specific model offers in various use cases. Our evaluations show that DarkBERT
outperforms current language models and may serve as a valuable resource for
future research on the Dark Web.
|
https://huggingface.co/papers/2305.08596
|
2023-05-16
|
2305.09617
| 5
|
Towards Expert-Level Medical Question Answering with Large Language
Models
|
Recent artificial intelligence (AI) systems have reached milestones in "grand
challenges" ranging from Go to protein-folding. The capability to retrieve
medical knowledge, reason over it, and answer medical questions comparably to
physicians has long been viewed as one such grand challenge.
Large language models (LLMs) have catalyzed significant progress in medical
question answering; Med-PaLM was the first model to exceed a "passing" score in
US Medical Licensing Examination (USMLE) style questions with a score of 67.2%
on the MedQA dataset. However, this and other prior work suggested significant
room for improvement, especially when models' answers were compared to
clinicians' answers. Here we present Med-PaLM 2, which bridges these gaps by
leveraging a combination of base LLM improvements (PaLM 2), medical domain
finetuning, and prompting strategies including a novel ensemble refinement
approach.
Med-PaLM 2 scored up to 86.5% on the MedQA dataset, improving upon Med-PaLM
by over 19% and setting a new state-of-the-art. We also observed performance
approaching or exceeding state-of-the-art across MedMCQA, PubMedQA, and MMLU
clinical topics datasets.
We performed detailed human evaluations on long-form questions along multiple
axes relevant to clinical applications. In pairwise comparative ranking of 1066
consumer medical questions, physicians preferred Med-PaLM 2 answers to those
produced by physicians on eight of nine axes pertaining to clinical utility (p
< 0.001). We also observed significant improvements compared to Med-PaLM on
every evaluation axis (p < 0.001) on newly introduced datasets of 240 long-form
"adversarial" questions to probe LLM limitations.
While further studies are necessary to validate the efficacy of these models
in real-world settings, these results highlight rapid progress towards
physician-level performance in medical question answering.
|
https://huggingface.co/papers/2305.09617
|
2023-05-16
|
2305.07922
| 5
|
CodeT5+: Open Code Large Language Models for Code Understanding and
Generation
|
Large language models (LLMs) pretrained on vast source code have achieved
prominent progress in code intelligence. However, existing code LLMs have two
main limitations in terms of architecture and pretraining tasks. First, they
often adopt a specific architecture (encoder-only or decoder-only) or rely on a
unified encoder-decoder network for different downstream tasks. The former
paradigm is limited by inflexibility in applications while in the latter, the
model is treated as a single system for all tasks, leading to suboptimal
performance on a subset of tasks. Second, they often employ a limited set of
pretraining objectives which might not be relevant to some downstream tasks and
hence result in substantial performance degradation. To address these limitations,
we propose ``CodeT5+'', a family of encoder-decoder LLMs for code in which
component modules can be flexibly combined to suit a wide range of downstream
code tasks. Such flexibility is enabled by our proposed mixture of pretraining
objectives to mitigate the pretrain-finetune discrepancy. These objectives
cover span denoising, contrastive learning, text-code matching, and causal LM
pretraining tasks, on both unimodal and bimodal multilingual code corpora.
Furthermore, we propose to initialize CodeT5+ with frozen off-the-shelf LLMs
without training from scratch to efficiently scale up our models, and explore
instruction-tuning to align with natural language instructions. We extensively
evaluate CodeT5+ on over 20 code-related benchmarks in different settings,
including zero-shot, finetuning, and instruction-tuning. We observe
state-of-the-art (SoTA) model performance on various code-related tasks, such
as code generation and completion, math programming, and text-to-code retrieval
tasks. Particularly, our instruction-tuned CodeT5+ 16B achieves new SoTA
results on HumanEval code generation task against other open code LLMs.
|
https://huggingface.co/papers/2305.07922
|
2023-05-16
|
2305.08848
| 4
|
Small Models are Valuable Plug-ins for Large Language Models
|
Large language models (LLMs) such as GPT-3 and GPT-4 are powerful but their
weights are often publicly unavailable and their immense sizes make the models
difficult to tune with common hardware. As a result, effectively tuning
these models with large-scale supervised data can be challenging. As an
alternative, In-Context Learning (ICL) can only use a small number of
supervised examples due to context length limits. In this paper, we propose
Super In-Context Learning (SuperICL) which allows black-box LLMs to work with
locally fine-tuned smaller models, resulting in superior performance on
supervised tasks. Our experiments demonstrate that SuperICL can improve
performance beyond state-of-the-art fine-tuned models while addressing the
instability problem of in-context learning. Furthermore, SuperICL can enhance
the capabilities of smaller models, such as multilinguality and
interpretability.
|
https://huggingface.co/papers/2305.08848
|
2023-05-16
|
2305.09662
| 3
|
Make-An-Animation: Large-Scale Text-conditional 3D Human Motion
Generation
|
Text-guided human motion generation has drawn significant interest because of
its impactful applications spanning animation and robotics. Recently,
application of diffusion models for motion generation has enabled improvements
in the quality of generated motions. However, existing approaches are limited
by their reliance on relatively small-scale motion capture data, leading to
poor performance on more diverse, in-the-wild prompts. In this paper, we
introduce Make-An-Animation, a text-conditioned human motion generation model
which learns more diverse poses and prompts from large-scale image-text
datasets, enabling significant improvement in performance over prior works.
Make-An-Animation is trained in two stages. First, we train on a curated
large-scale dataset of (text, static pseudo-pose) pairs extracted from
image-text datasets. Second, we fine-tune on motion capture data, adding
additional layers to model the temporal dimension. Unlike prior diffusion
models for motion generation, Make-An-Animation uses a U-Net architecture
similar to recent text-to-video generation models. Human evaluation of motion
realism and alignment with input text shows that our model reaches
state-of-the-art performance on text-to-motion generation.
|
https://huggingface.co/papers/2305.09662
|
From the Frontier Research Team at Takara.ai, we present Daily Papers Popularity — a dataset tracking the popularity of Hugging Face Papers with arXiv metadata. It aggregates daily paper entries with votes, IDs, titles, abstracts (backfilled via the HF API), and URLs, enabling analysis of patterns in paper reception and engagement.
Daily Papers Popularity
- Columns: date, arxiv_id, votes, title, abstract, url
- Format: Parquet
Load
from datasets import load_dataset

# Download the dataset from the Hugging Face Hub (cached locally on later calls)
ds = load_dataset("takara-ai/daily-papers-popularity")
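As a quick analysis example, the snippet below is a minimal, self-contained sketch (not part of any official tooling for this dataset) that converts the table to pandas and lists the highest-voted papers. It assumes the column names listed above and simply takes whichever split the Hub exposes.

from datasets import load_dataset
import pandas as pd

# Minimal sketch: rank papers by votes (column names assumed from the card above).
ds = load_dataset("takara-ai/daily-papers-popularity")
split = list(ds.keys())[0]            # use whichever split the Hub exposes
df = ds[split].to_pandas()            # assumed columns: date, arxiv_id, votes, title, abstract, url

# Ten highest-voted papers with their dates and titles
top = df.sort_values("votes", ascending=False).head(10)
print(top[["date", "votes", "title"]].to_string(index=False))

From the same DataFrame, grouping by date or month is a natural next step for the kind of engagement analysis described above.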
Visualisations
Reference charts derived from the dataset; each visual links to static assets hosted alongside the dataset. A sketch for reproducing one of these charts follows the list below.
- Votes vs Title Length
- Votes vs Abstract Length
- Votes vs Month
- Votes vs Day of Month
- Distribution: Daily Paper Concentration
- Votes vs Daily Paper Concentration
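The snippet below is a minimal sketch of how one of these charts, Votes vs Title Length, could be reproduced locally with matplotlib; the figure size, marker styling, and output filename are illustrative assumptions, not the settings used for the hosted assets.

import matplotlib.pyplot as plt
from datasets import load_dataset

# Illustrative sketch for "Votes vs Title Length" (styling choices are assumptions).
ds = load_dataset("takara-ai/daily-papers-popularity")
df = ds[list(ds.keys())[0]].to_pandas()

df["title_length"] = df["title"].str.len()   # title length in characters

plt.figure(figsize=(8, 5))
plt.scatter(df["title_length"], df["votes"], s=8, alpha=0.4)
plt.xlabel("Title length (characters)")
plt.ylabel("Votes")
plt.title("Votes vs Title Length")
plt.tight_layout()
plt.savefig("votes_vs_title_length.png", dpi=150)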
For research inquiries and press, please reach out to [email protected]
Transforming humanity.
- Downloads last month: 76





