
GVfs

GVfs is a userspace virtual filesystem for GIO, a library available in GLib since version 2.16, designed to provide seamless filesystem access and management for applications in the GNOME desktop. It enables GIO-based applications to treat diverse storage concepts, such as remote protocols and resources, as part of the local filesystem, using URI-based addressing for full identification. Key components include a set of backends supporting protocols such as SFTP, SMB/CIFS, HTTP, WebDAV, FTP, and NFS, along with modules for volume monitoring, persistent metadata storage, and a limited FUSE integration that exposes mounts to non-GIO applications. GVfs operates in userspace via D-Bus, running mounts as separate processes to enhance security and modularity, while integrating directly with GIO's I/O abstractions for hot-plugging and dynamic resource handling in the GNOME desktop.

Overview

Purpose and design goals

GVfs is a userspace virtual filesystem designed for the GNOME desktop environment, serving as the primary implementation for accessing non-local files and resources through the GIO library's I/O abstractions. It enables applications to interact with diverse storage types, including local files, remote servers, and peripheral devices, in a manner that abstracts away the underlying complexities of different protocols and hardware interfaces. The core design goals of GVfs emphasize providing a unified interface that allows seamless file operations across local and remote systems, regardless of the storage medium. This includes support for URI-based access schemes such as smb:// for Samba shares and sftp:// for secure file transfers, enabling applications to reference resources using standardized location identifiers without needing protocol-specific code. Additionally, GVfs prioritizes asynchronous I/O operations to ensure non-blocking behavior suitable for graphical user interfaces, utilizing thread pools and backend method variants prefixed try_ (non-blocking) and do_ (blocking) to handle operations efficiently. Extensibility is another key goal, achieved through a modular backend system that allows integration of custom handlers for new protocols or devices.

As a userspace implementation, GVfs operates without kernel-level dependencies, running daemons and libraries in user processes to enhance security and portability within the GNOME ecosystem on Unix-like platforms. This approach facilitates deep integration with desktop applications, such as the GNOME Files (Nautilus) file manager, by providing volume monitoring and filtered metadata that align with user expectations for intuitive file handling. GVfs is licensed under the GNU Lesser General Public License version 2.0 or later, promoting its reuse in other projects while maintaining compatibility with the broader GNOME stack.

Relation to GIO and GnomeVFS

GVfs serves as the primary backend implementation for the virtual filesystem (VFS) layer within GIO, a comprehensive I/O abstraction library introduced in GLib version 2.15.0 in 2007. This integration enables GIO to provide unified access to local files, network protocols, and virtual resources through a consistent API, abstracting away the complexities of diverse storage backends. By handling operations via userspace daemons that communicate over D-Bus, GVfs ensures efficient and secure file I/O without embedding filesystem logic directly into applications. As a direct successor to GnomeVFS, GVfs was designed to overcome the architectural shortcomings of its predecessor, which relied on a monolithic single-process model prone to threading complications and concurrency limitations. GnomeVFS required applications to initialize threading early and forced backends to be thread-safe, often leading to issues like reentrancy problems during blocking operations and serialized access in protocols such as SMB or FTP due to non-thread-safe libraries. In response, GVfs shifts to a model of separate userspace daemons for each mount point, improving isolation, reducing memory overhead, and eliminating the need for application-level threading in the VFS layer. GVfs introduces significant enhancements in modularity and extensibility compared to GnomeVFS's rigid structure. Its pluggable backend architecture allows developers to easily integrate support for new protocols, such as SFTP, SMB, or DAV, without altering the core GIO interface. Furthermore, GVfs supports integration with FUSE (Filesystem in Userspace), bridging virtual mounts to the standard POSIX filesystem for access by non-GIO applications, thereby addressing GnomeVFS's limitation where remote resources were inaccessible to command-line tools or other software. This design avoids the single-process bottlenecks of GnomeVFS, where thread locks could hinder parallel operations, and promotes better performance through daemon-specific resource management. Ultimately, GVfs empowers GIO applications to interact with a wide array of resources, ranging from local devices to remote servers, transparently and without requiring knowledge of the underlying protocols or backend implementations. This fosters portability and simplifies development, as applications rely solely on GIO's high-level API for all file operations.

History and development

Origins and replacement of GnomeVFS

GVfs originated from discussions within the GNOME development community in 2006, where developers identified significant limitations in GnomeVFS that hindered its extensibility and maintainability. Key issues included the absence of userspace isolation, which forced all backends to be thread-safe and negatively impacted performance, as well as challenges in extending protocols due to the monolithic architecture. Alexander Larsson, a prominent contributor, proposed a new design emphasizing a daemon-per-mount model to provide better isolation and extensibility. Initial development of GVfs commenced under Larsson's leadership, aligning with the preparation for GNOME 2.22, released in 2008. This effort involved creating backends for essential protocols such as SFTP and FTP, enabling secure and standard network file access from the outset. A core aspect of the early implementation was the use of D-Bus for inter-process communication, which facilitated the daemon-per-mount approach by allowing separate processes for each mount point while maintaining session-wide state sharing. By late 2007, the related GIO library, which provides the API layer for GVfs, had been merged into GLib, setting the stage for broader adoption. The replacement of GnomeVFS proceeded with its deprecation shortly after GVfs's introduction in GNOME 2.22, as developers were encouraged to migrate to the new system for improved asynchronous operations and backend flexibility. This transition culminated in GNOME 3, released in 2011, where GVfs became the standard virtual file system infrastructure for handling storage and file operations across the desktop environment. Larsson presented on GVfs's design and migration strategies at GUADEC 2007, underscoring its role as a direct successor intended to address GnomeVFS's shortcomings.

Key milestones and releases

GVfs was initially released as part of GNOME 2.22 on March 12, 2008, introducing a userspace virtual filesystem designed to integrate with GIO and address limitations of the previous GnomeVFS architecture. By the release of GNOME 3.0 on April 6, 2011, GVfs had achieved full adoption within the GNOME ecosystem, with GnomeVFS fully deprecated and removed from core components to streamline operations. Version 1.20.0, released on August 23, 2014, brought significant enhancements including improved support for mobile devices through better MTP backend integration, allowing more reliable file access and transfer for Android smartphones and similar devices. Version 1.40.0, released on March 11, 2019, included general improvements and stability updates. Version 1.50.0, released on March 18, 2022, included general stability improvements across backends. The most recent stable release, 1.58.0, arrived on September 9, 2025, featuring bug fixes for the volume monitors to ensure consistent detection and handling of mounted devices and networks. GVfs is maintained through the GNOME GitLab repository, driven by a community of contributors emphasizing compatibility with modern GNOME environments since GNOME 3.20 in 2016. Over its history, numerous backends have been added iteratively to support diverse protocols and devices, with deprecations of insecure modes in favor of secure alternatives such as SFTP.

Architecture

Core components and daemons

GVfs consists of several core components that enable its integration with the GIO library and provide virtual filesystem functionality. The primary shared library component is the GVfs GIO module, which extends GIO's I/O abstractions to support non-local file operations and is loaded dynamically by applications using GLib's GIO framework. This module allows seamless access to GVfs-managed resources without requiring applications to handle backend-specific details directly. Additionally, GVfs includes supporting libraries, such as those in the gvfs-libs package, which provide common functions shared between the daemons and the GIO module for efficient operation. At the heart of GVfs is the master daemon, gvfsd, which serves as the central coordinator for operations and provides the org.gtk.vfs.Daemon service on the user's session bus. It automatically starts when accessed by GIO clients and manages the lifecycle of mounts by spawning and tracking individual backend processes, ensuring isolation and resource efficiency. For instance, when a remote protocol like SFTP is accessed, gvfsd launches a dedicated per-backend daemon such as gvfsd-sftp to handle the specific protocol operations in a separate process, preventing failures in one backend from affecting others. Another key daemon is gvfsd-metadata, which serializes writes to GVfs's internal metadata storage, enabling applications like the GNOME Files file manager to store and retrieve file tags, emblems, and custom attributes in a user-specific database located at $XDG_DATA_HOME/gvfs-metadata. Read operations for metadata are handled client-side by GIO to minimize latency. To ensure compatibility with traditional applications that do not use GIO, GVfs employs the gvfsd-fuse daemon, which implements a FUSE (Filesystem in Userspace) interface to expose active GVfs mounts as a regular filesystem. This daemon creates a virtual mount point, typically at /run/user/$UID/gvfs (following the XDG Base Directory specification) or the legacy ~/.gvfs/ directory, allowing any application to access GVfs resources through standard file paths. All GVfs daemons, including gvfsd, gvfsd-fuse, gvfsd-metadata, and per-backend instances, operate as unprivileged user processes rather than system-wide services, enhancing security by limiting their scope, and communicate via D-Bus for inter-process coordination. This user-centric design isolates mounts to individual sessions and prevents privilege escalation risks associated with kernel-level filesystems.
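As a small illustration of the per-user layout just described, the following stdlib-only Python sketch resolves the metadata store location. The ~/.local/share fallback is an assumption taken from the XDG Base Directory specification rather than from GVfs itself:

```python
import os
from pathlib import Path

def gvfs_metadata_dir() -> Path:
    """Resolve the per-user GVfs metadata store location.

    Follows the convention described above: $XDG_DATA_HOME/gvfs-metadata,
    falling back to ~/.local/share/gvfs-metadata when XDG_DATA_HOME is
    unset (per the XDG Base Directory specification).
    """
    data_home = os.environ.get("XDG_DATA_HOME")
    base = Path(data_home) if data_home else Path.home() / ".local" / "share"
    return base / "gvfs-metadata"

print(gvfs_metadata_dir())
```

On a typical session this prints a path such as /home/user/.local/share/gvfs-metadata, the database that gvfsd-metadata serializes writes into.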

D-Bus communication and APIs

GVfs utilizes D-Bus as its primary mechanism for inter-process communication, leveraging the session bus to facilitate interactions between the master daemon gvfsd, backend daemons, and client applications. This architecture enables efficient, asynchronous operations such as file monitoring and mount management, where client requests from the GIO library are proxied to the appropriate daemons via D-Bus messages. For performance reasons, GVfs employs private D-Bus connections between components, avoiding bottlenecks on the shared session bus. The GIO library exposes a GVfs class that provides essential APIs for filesystem operations, including mounting and unmounting resources through methods like g_vfs_get_file_for_uri() for resolving URIs into GFile objects and g_vfs_parse_name() for handling textual parse names. These APIs abstract the underlying D-Bus calls, allowing applications to perform non-local I/O without direct awareness of the communication layer. Additionally, URI resolution functions such as g_vfs_get_supported_uri_schemes() enable discovery of supported protocols. Volume monitoring in GVfs is handled through GIO's GVolumeMonitor APIs, which detect and report changes in drives, volumes, and mounts via signals emitted by backend monitors like udisks2. Key methods include g_volume_monitor_get_volumes() to list available volumes and g_volume_monitor_get_mounts() to retrieve active mounts, while signals such as mount-added, mount-removed, and volume-changed allow applications to respond asynchronously to hot-plug events and filesystem alterations. Security in GVfs's D-Bus communication is enhanced by the per-user session bus, which provides inherent sandboxing by isolating each user's daemons and applications within their own bus instance, preventing cross-user interference. For network backends, authentication is managed through the GMountOperation API, which prompts for credentials via signals like ask-password and integrates with keyrings for secure storage, ensuring protected access to remote resources during mounting. GVfs supports the org.gtk.vfs.Mountable D-Bus interface, which defines methods such as Mount() for initiating mounts with specifications like automount options and Unmount() for ejecting devices, enabling standardized operations across backends. The org.gtk.vfs.MountTracker interface complements this by providing mount listing and registration methods, along with signals like Mounted and Unmounted for real-time tracking.
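To make these interfaces concrete, here is a hedged sketch that queries the mount tracker with GLib's gdbus command-line tool. The object path /org/gtk/vfs/mounttracker and the exact ListMounts method signature are assumptions worth verifying against common/org.gtk.vfs.xml in the GVfs sources:

```python
import shutil
import subprocess

# Assumed naming, based on the interfaces described above; verify
# against common/org.gtk.vfs.xml before relying on it.
BUS_NAME = "org.gtk.vfs.Daemon"
OBJECT_PATH = "/org/gtk/vfs/mounttracker"
INTERFACE = "org.gtk.vfs.MountTracker"

def list_mounts_cmd() -> list:
    """Build a gdbus invocation asking the mount tracker for the
    current list of GVfs mounts on the session bus."""
    return [
        "gdbus", "call", "--session",
        "--dest", BUS_NAME,
        "--object-path", OBJECT_PATH,
        "--method", INTERFACE + ".ListMounts",
    ]

if shutil.which("gdbus"):
    # Only meaningful inside a desktop session where gvfsd is running;
    # check=False so a missing daemon does not raise.
    subprocess.run(list_mounts_cmd(), check=False)
```

Because gvfsd is D-Bus activatable, such a call can also trigger the daemon's autostart on an otherwise idle session.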

Backends and protocols

Network access backends

GVfs provides several backends dedicated to accessing remote file systems over network protocols, enabling seamless integration with GIO applications for operations like reading, writing, and metadata retrieval on distant servers. These backends operate as dedicated daemons, such as gvfsd-sftp for SFTP over SSH, which supports URI schemes like sftp://user@host/path and handles encrypted transfers natively through SSH authentication mechanisms. The gvfsd-ftp backend implements the File Transfer Protocol (FTP), allowing access via URIs such as ftp://host/path, with support for read, write, and delete operations on unencrypted connections; for secure variants, users may combine it with external tunneling, though GVfs emphasizes native protocol security where available. The gvfsd-http backend supports HTTP access using URIs like http://host/path, built on libsoup for handling web resources, though it is primarily read-only for file-like operations. Similarly, the gvfsd-dav backend facilitates Web Distributed Authoring and Versioning (WebDAV) access using dav://host/path or davs://host/path for TLS-encrypted sessions, relying on libraries like libsoup for HTTP handling and libxml for parsing, while supporting authentication via HTTP basic or digest methods. In version 1.49.90, the DAV backend was ported to libsoup3. For Windows-compatible shares, GVfs employs the gvfsd-smb backend, built on libsmbclient, to connect via smb://server/share URIs; this enables mounting of Server Message Block (SMB)/Common Internet File System (CIFS) resources, with support for the SMB 2 and 3 protocols introduced in version 1.40 for improved security and performance in compatible environments. The gvfsd-smb-browse backend for share discovery has been disabled by default since version 1.55.1. Additional network backends include gvfsd-nfs for Network File System (NFS) mounts using nfs://server/path and libnfs for versions 2 and 3 compatibility, as well as gvfsd-afp for the Apple Filing Protocol (AFP) via afp://host/volume, targeting legacy macOS and Apple network shares. More recent additions include the Google Drive backend (added in version 1.25.92 using libgdata) supporting google-drive:// URIs for cloud storage integration, and the Microsoft OneDrive backend (added in version 1.53.90 using the msgraph library) for access via onedrive:// URIs, with further features added in later updates. Across these backends, authentication is managed through user prompts integrated with the GNOME Keyring, which stores credentials securely for subsequent sessions, while each protocol handles its inherent encryption, such as SSH for SFTP or TLS for davs://, without reliance on external wrappers like Files transferred over Shell protocol (FISH). This design ensures isolated, per-user access to remote resources, with daemons spawning on demand to maintain security and resource efficiency.
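Because every backend shares the same URI grammar, a client can decompose any of these locations generically. The stdlib-only sketch below is purely illustrative (GVfs itself resolves URIs through GIO, not urllib); the hostnames and paths are hypothetical:

```python
from urllib.parse import urlsplit

# Hypothetical example locations; any scheme://[user@]host/path URI
# handled by the backends above splits into the same components.
for uri in ("sftp://alice@fileserver/home/alice",
            "smb://fileserver/share",
            "davs://webserver/remote.php/webdav"):
    parts = urlsplit(uri)
    # scheme selects the backend daemon; the rest becomes the
    # mount specification handed to it.
    print(parts.scheme, parts.username, parts.hostname, parts.path)
```

This is the decomposition a backend daemon effectively receives: the scheme selects which gvfsd-* process to spawn, while user, host, and path parameterize the mount.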

Device and media backends

GVfs provides specialized backends for accessing physical devices and handling media operations, enabling seamless integration with hardware such as cameras, portable media players, and optical drives within the GNOME desktop environment. These backends operate as dedicated daemons, each managing a specific type of device or media through virtual filesystem mounts that extend the GIO API. For instance, the gvfsd-gphoto2 backend utilizes the GPhoto2 library to support digital cameras, allowing users to browse and transfer photos from devices like DSLRs and compact cameras as if they were local filesystems. Similarly, the gvfsd-mtp backend implements the Media Transfer Protocol (MTP) for devices such as Android smartphones and portable media players, facilitating file access and synchronization without requiring additional drivers. The gvfsd-afc backend handles Apple's proprietary Apple File Conduit protocol, providing read-write access to iOS devices like iPhones and iPod Touches, supporting tasks such as file transfer and media management directly from the desktop using libimobiledevice. In addition to device-specific backends, GVfs includes support for various media formats and operations through dedicated daemons. The gvfsd-cdda backend enables mounting of audio CDs via the Compact Disc Digital Audio (CDDA) format, allowing applications to read track data and extract audio files using URIs like cdda://sr0, and integrates with libraries such as libcdio-paranoia for reliable playback and ripping. For optical media, the gvfsd-burn backend offers virtual filesystem access to disc burning operations, supporting the creation and management of ISO images and data discs by interfacing with underlying kernel modules; however, it has been disabled by default since version 1.55.1. Archive handling is managed by the gvfsd-archive backend, which leverages the libarchive library to mount common formats, including zip, tar, and ISO files, treating their contents as navigable directories for extraction and, in supported cases, writing operations. Beyond direct device and media access, GVfs incorporates utility backends for enhanced desktop functionality. The gvfsd-trash backend implements a cross-filesystem trash mechanism, aggregating deleted files from multiple mount points into a unified virtual directory, which prevents data loss across diverse volumes and supports restoration via the GIO API. Complementing this, the gvfsd-recent backend tracks recently accessed files across the system, maintaining a virtual folder that applications can query for quick access to user history without duplicating file content. GVfs integrates closely with UDisks2 for block device management, where the gvfs-udisks2-volume-monitor serves as the primary volume monitoring component, detecting and mounting removable media such as USB drives and SD cards through D-Bus signals. This monitor, developed by David Zeuthen, ensures real-time updates for device insertion and removal, leveraging UDisks2's policy-based automation for secure and efficient handling of storage hardware. These backends collectively support hot plugging mechanisms, enabling automatic detection and mounting of devices upon connection to maintain a fluid user experience.
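As an illustration of how the archive backend addresses a local file, the sketch below builds an archive:// URI by percent-encoding the archive's own file:// URI into the authority component. This encoding convention is an assumption drawn from common gio usage (e.g. gio mount archive://...) and should be verified against the gvfsd-archive documentation:

```python
from urllib.parse import quote

def archive_uri(path: str) -> str:
    """Build a GVfs archive backend URI for a local archive file.

    Assumed convention: the archive's file:// URI is percent-encoded
    and embedded as the host part of an archive:// URI.
    """
    file_uri = "file://" + quote(path)   # '/' stays unescaped here
    return "archive://" + quote(file_uri, safe="")

print(archive_uri("/home/user/photos.zip"))
```

The resulting URI could then be handed to gio mount, after which the archive's contents appear as a browsable directory tree.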

Features

FUSE integration

GVfs integrates with FUSE (Filesystem in Userspace) through the gvfsd-fuse daemon, which creates a user-specific mount point to expose GVfs backends as a standard filesystem. This daemon is automatically started by the main gvfsd process and registers itself via D-Bus to handle the FUSE mount, typically at /run/user/$UID/gvfs, where $UID is the user's numeric ID; if this path is unavailable, it falls back to ~/.gvfs. The mount presents a flat directory structure of active GVfs mounts, named according to their GMountSpec (e.g., sftp://example.com/), allowing POSIX-compliant file operations on virtual resources without requiring applications to use the GIO API directly. This integration was introduced to bridge the gap between GIO-based applications and traditional tools or legacy software, enabling seamless access to GVfs-managed resources like remote filesystems or media devices via standard file I/O calls. For instance, command-line utilities such as cp or ls can interact with mounted URIs through the FUSE layer, while permissions from the underlying GVfs backends are mirrored to maintain security consistency. The daemon auto-starts on user login and persists until logout, ensuring the mount remains available without manual intervention. Despite these advantages, the FUSE integration introduces limitations, particularly a performance overhead for high-I/O workloads due to the userspace-kernel context switching inherent in FUSE. Not all GVfs backends fully support POSIX semantics; for example, some lack seeking capabilities or efficient random access, leading to suboptimal behavior in certain scenarios. Additionally, it requires the fuse kernel module to be loaded and the user to be in the fuse group on systems enforcing such restrictions. The daemon terminates if the master gvfsd process exits, ensuring cleanup but potentially disrupting access during session changes. As of GVfs 1.58.0 (September 2025), recent enhancements include performance improvements such as filling stat info during readdir (added in 1.55.90) to reduce overhead.
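The lookup convention for this mount point can be sketched in a few lines of stdlib Python. This is a sketch under the path conventions stated above, not GVfs code; it simply checks $XDG_RUNTIME_DIR/gvfs and falls back to the legacy ~/.gvfs:

```python
import os
from pathlib import Path

def gvfs_fuse_mountpoint() -> Path:
    """Locate the per-user FUSE mount exposed by gvfsd-fuse:
    $XDG_RUNTIME_DIR/gvfs (typically /run/user/$UID/gvfs), with the
    legacy ~/.gvfs as a fallback."""
    runtime = os.environ.get("XDG_RUNTIME_DIR") or "/run/user/%d" % os.getuid()
    candidate = Path(runtime) / "gvfs"
    return candidate if candidate.exists() else Path.home() / ".gvfs"

def list_fuse_mounts() -> list:
    """Names of the active GVfs mounts, visible to any POSIX
    application as ordinary directories."""
    root = gvfs_fuse_mountpoint()
    return sorted(p.name for p in root.iterdir()) if root.is_dir() else []

print(gvfs_fuse_mountpoint(), list_fuse_mounts())
```

Any program that can walk a directory tree, GIO-aware or not, can operate on the paths this function returns, which is precisely the compatibility bridge gvfsd-fuse provides.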

Hot plugging mechanism

The hot plugging mechanism in GVfs enables dynamic detection and management of removable storage devices, such as USB drives and memory cards, without requiring user intervention or application restarts. When a device is connected, the Linux kernel detects the hardware event and generates a uevent, which is processed by systemd-udevd, the device manager daemon. systemd-udevd then triggers notifications through D-Bus to relevant services, including the udisks2 daemon (udisksd) and GVfs components like gvfsd and the volume monitors. GVfs's role in this process is primarily handled by the gvfs-udisks2-volume-monitor, a specialized volume monitor that integrates with udisks2 to track block devices and file systems. Upon receiving the event from udisksd, gvfs-udisks2-volume-monitor queries the device details, creates corresponding GVolume and GMount objects via the GIO library, and emits signals such as "volume-added" to notify interested applications. This monitor also provides icons, labels, and other metadata for seamless integration with file managers like GNOME Files, allowing automatic mounting based on user preferences configured through GSettings. Since GVfs 1.51.1, enhancements to hot plugging for device protocols include improved MTP handling, delete events on disconnection, and crash prevention during unmounting. Further updates in 1.58.0 added MTP file rename support and cancellable folder enumerations, while 1.57.1 introduced AFC edit mode for iOS devices with version-specific detection and fixes for persistent mounts after disconnection. These updates, as of version 1.58.0 (September 2025), ensure more reliable hotplug detection for modern mobile storage. Overall, this mechanism propagates events via D-Bus signals across GVfs daemons and applications, guaranteeing seamless addition and removal of devices while applications remain operational. Recent stability improvements to the udisks2 volume monitor, such as increased reference counts to prevent crashes (1.56.1), further enhance reliability.

Integration and packaging

Usage in GNOME environment

GVfs serves as the foundational virtual file system in the GNOME desktop environment, primarily powering the GNOME Files (Nautilus) file manager to enable unified browsing of local and remote files through its GIO-based URI scheme. This integration allows Files to transparently access diverse backends, such as local devices, network shares via SMB or SFTP, and cloud services, presenting them as standard directories in the user interface. A key aspect of this usage is the support for drag-and-drop operations across heterogeneous URIs, permitting users to transfer files seamlessly between local volumes and remote locations without intermediary steps or specialized software. For command-line interactions, the gio utility exposes GVfs functionality, allowing users to mount resources directly, as in the example gio mount smb://server/share to connect to a Windows share. Authentication and credential storage for these operations are handled through GNOME Online Accounts, which integrates with GVfs to securely manage access to online services like Google Drive, ensuring persistent mounts across sessions. GVfs also enhances the desktop experience by managing volume icons for quick navigation to mounted devices and networks. Search capabilities are bolstered by the gvfsd-recent daemon, which implements the recent:// URI scheme to track and retrieve recently accessed files across backends, integrating with Files and other applications for efficient file discovery. Trash operations are similarly unified via the gvfsd-trash backend, enabling consistent deletion and recovery of files from both local and remote locations. In GNOME 40 and subsequent releases, GVfs is indispensable for Wayland sessions, delivering services entirely in userspace without kernel VFS dependencies, thereby supporting the compositor's security model and portal-based access controls.

Distribution-specific packaging

In Debian and its derivatives like Ubuntu, GVfs is distributed across multiple packages to promote modularity and allow users to install only the components they need. The core gvfs package provides the userspace virtual filesystem implementation and GIO module for seamless integration. The gvfs-daemons package includes the mount helper daemons that run as separate processes for handling backend operations. Additionally, gvfs-bin supplies command-line tools such as gvfs-mount and gvfs-ls for managing virtual file systems, while gvfs-backends bundles optional protocol support, including SMB/CIFS for network shares and other specialized backends such as gphoto2 and MTP. This split packaging enables fine-grained control, such as excluding network-related backends in security-conscious environments. Fedora and Red Hat Enterprise Linux (RHEL) adopt an RPM-based approach where the primary gvfs package encompasses the core backends for protocols like FTP, SFTP, and CIFS, ensuring broad functionality out of the box. Subpackages extend this with specific features, such as gvfs-gphoto2 for camera support via libgphoto2 dependencies and gvfs-smb for enhanced Windows interoperability. RHEL mirrors this structure in its repositories, prioritizing stability and dependency management through RPM for enterprise deployments. Arch Linux provides a more monolithic gvfs package in its extra repository, which includes the essential virtual filesystem components and common backends for immediate usability. Optional separate packages, like gvfs-smb for SMB/CIFS support, allow customization without bloating minimal installations. This design suits Arch's rolling-release model, where users can exclude network backends in lightweight or embedded setups to minimize resource usage. Such distribution-specific packaging strategies ensure that only required daemons and backends are active, thereby reducing the overall attack surface by limiting exposure to potential vulnerabilities in unused components. By late 2025, major distributions including Debian, Fedora, and Arch Linux have updated to GVfs version 1.58.0, incorporating recent patches and backend improvements.

References

  1. [1]
    Projects/gvfs – GNOME Wiki Archive
    GVfs is a userspace virtual filesystem implementation for GIO (a library available in GLib). GVfs comes with a set of backends, including trash support, SFTP, ...
  2. [2]
    GitHub - gicmo/gvfs: Virtual filesystem for the GNOME desktop
    gvfs is a userspace virtual filesystem designed to work with the i/o abstractions of gio (a library availible in glib >= 2.15.1).
  3. [3]
    Chapter 15. Virtual File Systems and Disk Management
    GVFS provides complete virtual file system infrastructure and handles storage in the GNOME Desktop. GVFS uses addresses for full identification based on the URI ...
  4. [4]
    gvfs(7) - Arch manual pages
    gvfs provides implementations that go beyond that and allow to access files and storage using many protocols, such as ftp, http, sftp, dav, nfs, etc.<|control11|><|separator|>
  5. [5]
    15.6. Exposing GNOME Virtual File Systems to All Other Applications
    In addition to applications built with the GIO library being able to access GVFS mounts, GVFS also provides a FUSE daemon which exposes active GVFS mounts.
  6. [6]
    Gvfs - Virtual File Systems - Unix Memo - Read the Docs
    Gvfs is a userspace virtual filesystem where mount runs as a separate processes which you talk to via D-Bus. It also contains a gio module that seamlessly adds ...
  7. [7]
    GNOME / gvfs · GitLab
    - **Purpose**: GVfs is a userspace virtual filesystem implementation for GIO, providing backends like trash support, SFTP, SMB, HTTP, DAV, and more. It includes volume monitors, metadata storage, and FUSE support for non-GIO applications.
  8. [8]
    Projects/gvfs/doc – GNOME Wiki Archive
    This document describes some aspects of GVfs architecture and explains reasons why things are done the way they are. It is intended for developers as a starting ...Missing: Virtual | Show results with:Virtual
  9. [9]
  10. [10]
    NEWS · master · GNOME / gvfs - GitLab
    Sep 9, 2025 · * Use gio from glib (glib 2.15.1 required). * Fix translation issues. * Fix various sftp backend issues. * Move .mount files to /usr/share/gvfs ...
  11. [11]
    GNOME 2.22 planning: GIO and GVFS proposed for inclusion
    Sep 28, 2007 · The GIO library is currently used to develop GVFS, a userspace virtual filesystem framework that is being designed to replace the aging GnomeVFS ...
  12. [12]
    Gio – 2.0: Overview - GTK Documentation
    The GVfs implementation for local files that is included in GIO has the name local , the implementation in the GVfs module has the name gvfs .
  13. [13]
    Plans for gnome-vfs replacement - The Mail Archive
    Sep 18, 2006 · In order to avoid all the problems with threading described above the vfs daemon will not use threads. In fact, I think the best approach is ...Missing: bottlenecks | Show results with:bottlenecks
  14. [14]
    Plans for gnome-vfs replacement
    Sep 18, 2006 · The ideal level for a VFS would be in glib, in a separate library similar to gthread or gobject. That way gtk+ would be able to integrate with it and all gnome ...Missing: origins | Show results with:origins
  15. [15]
    Features/Gvfs - Fedora Project Wiki
    Mar 17, 2008 · GIO, which is a new shared library that is part of GLib and provides the API for gvfs. Gvfs itself, which is a new package containing backends ...
  16. [16]
    [PDF] Migrating the Thunar File Manager to the Extensible Asynchronous ...
    Oct 1, 2009 · GIO was, along with GVfs, developed by Alexander Larsson with the goal to even- tually replace GnomeVFS, a VFS layer previously used inside the ...
  17. [17]
    Chapter 1. Introducing the GNOME 3 Desktop | 7
    GVFS provides complete virtual file system infrastructure and handles storage in the GNOME Desktop in general. Through GVFS , GNOME 3 integrates well with ...
  18. [18]
    GNOME 2.22 released, brings new architectural features
    Mar 12, 2008 · Among the most important enhancements in GNOME 2.22 are the GVFS virtual file system framework, which brings improved network transparency ...Missing: initial | Show results with:initial
  19. [19]
    Index of /sources/gvfs/1.20/
    - **Release Date for GVfs 1.20.0**: 2014-Aug-23 11:13
  20. [20]
    Index of /sources/gvfs/1.40/
    Index of /sources/gvfs/1.40/ ; gvfs-1.40.0.news, 62 B · 2019-Mar-11 13:48 ; gvfs-1.40.0.sha256sum, 168 B · 2019-Mar-11 13:48 ; gvfs-1.40.0.tar.xz, 1.1 MiB, 2019-Mar- ...
  21. [21]
    GVFS-SMB write problems to SMB2 shares (#373) · Issue - GitLab
    Feb 13, 2019 · Client SMB pings the server in packet 1 who responds in packet 2, queries info on the folder in packet 4 and gets some basic info back in packet ...
  22. [22]
    Debian -- Details of package gvfs-libs in sid
    gvfs is a userspace virtual filesystem where mounts run as separate processes which you talk to via D-Bus. It also contains a gio module.Missing: libgvfs | Show results with:libgvfs
  23. [23]
    gvfsd(1) — Arch manual pages
    ### Summary of gvfsd Daemon
  24. [24]
    gvfsd-metadata(1) — Arch manual pages
    - **Description**: `gvfsd-metadata` is a daemon that serializes writes to the internal gvfs metadata storage. It autostarts when GIO clients modify metadata.
  25. [25]
    gvfsd-fuse(1) — Arch manual pages
    - **Description**: gvfsd-fuse is a Fuse daemon for gvfs, enabling POSIX applications to access gvfs backends.
  26. [26]
    Gio.Vfs
    ### Summary of GVfs Class and Related GIO Features
  27. [27]
    Gio.VolumeMonitor
    ### Summary of GVolumeMonitor APIs for Detecting Changes, Volumes, Mounts, Drives
  28. [28]
    Gio.MountOperation
    ### Summary: GMountOperation for Authentication in GVfs Context
  29. [29]
    common/org.gtk.vfs.xml · master · GNOME / gvfs · GitLab
    ### Extracted Key D-Bus Interfaces from org.gtk.vfs.xml
  30. [30]
    Projects/gvfs/backends – GNOME Wiki Archive
    ### Summary of GVfs Network Access Backends
  31. [31]
    Chapter 15. Browsing files on a network share | 9
    GVFS URI examples: SSH: ssh://user@server.example.com/path; NFS: nfs://server/path; Windows SMB: smb://server/Share; WebDAV: dav://example.server.com/path.
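For illustration (not drawn from any of the sources listed here), the scheme-based URIs in the entry above follow ordinary RFC 3986 structure, so a generic parser can split them; this sketch uses Python's standard `urllib.parse`:

```python
from urllib.parse import urlparse

# GVfs addresses every resource by URI; the scheme (ssh, nfs, smb,
# dav, ...) selects the backend.  A generic URI parser is enough to
# pull out the pieces a backend would need.
for uri in [
    "ssh://user@server.example.com/path",
    "nfs://server/path",
    "smb://server/Share",
    "dav://example.server.com/path",
]:
    p = urlparse(uri)
    print(p.scheme, p.hostname, p.path)
```

On a system with GVfs installed, such a location can typically be mounted from the command line with `gio mount smb://server/Share`.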
  32. [32]
    Details of package gvfs-backends in sid
    It also supports exposing the gvfs mounts to non-gio applications using fuse. This package contains the afc, afp, archive, cdda, dav, dnssd, ftp, gphoto2, http, ...
  33. [33]
    Impossible to open smb share file (#307) · Issue · GNOME/gvfs
    Jul 21, 2017 · This is with ubuntu disco as the client, gvfs 1.40.1-1 (debian sync), using samba 4.10.0 libs: andreas@disco-desktop:~$ pkill gvfsd-smb ...
  34. [34]
    15.7. Password Management of GVFS Mounts
    A typical GVFS mount asks for credentials on its activation unless the resource allows anonymous authentication or does not require any at all.
  35. [35]
    README.md · master - gvfs - GitLab - GNOME
    Dec 20, 2024 · GVfs comes with a set of backends, including trash support, SFTP, SMB, HTTP, DAV, and many others. GVfs also contains modules for GIO that ...
  36. [36]
    Bug 759952 – gvfs-mount cdda://sr0 fails - GNOME Bugzilla
    Using gvfs-1.24.1, gvfs-mount fails to mount an audio cd, even though it has been compiled against libcdio-paranoia. I can play an audio cd using gstreamer ...
  37. [37]
    gvfsd-archive write support (#103) · Issue · GNOME/gvfs
    Jul 24, 2009 · The backend is written around the libarchive library [1] which sets further limitations for every format it supports. Write operations over the ...
  38. [38]
    start backend (mount and volume monitors) on demand (#275) · Issue
    Feb 11, 2016 · I am trying to use a DSLR camera as a webcam following this guide, but in order to achieve this I have to always kill gvfs-gphoto2-volume- ...
  39. [39]
    Chapter 16. Troubleshooting volume management in GNOME
    The gvfsd-fuse daemon requires a path where it can expose its services. When the /run/user/UID/gvfs/ path is unavailable, gvfsd-fuse uses the ~/.gvfs ...
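The fallback behaviour described in the entry above can be sketched as follows; this is an illustrative reimplementation of the documented path choice, not the daemon's actual code (the function name `gvfs_fuse_dir` is invented for this sketch):

```python
import os

def gvfs_fuse_dir(uid, runtime_dir_available):
    """Pick where gvfsd-fuse exposes its mounts, following the
    documented order: /run/user/UID/gvfs when the runtime directory
    exists, otherwise the legacy ~/.gvfs fallback."""
    if runtime_dir_available:
        return "/run/user/%d/gvfs" % uid
    return os.path.expanduser("~/.gvfs")

print(gvfs_fuse_dir(1000, True))   # -> /run/user/1000/gvfs
print(gvfs_fuse_dir(1000, False))
```

In practice the real daemon receives its mount point as a command-line argument; the function above only mirrors the selection order the troubleshooting chapter describes.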
  40. [40]
  41. [41]
    How does mounting on the GUI work "under the hood"
    Nov 13, 2013 · It consists of two parts: A shared library which is loaded by applications supporting GIO; GVFS itself, which contains a collection of daemons ...
  42. [42]
    Chapter 15. Managing storage volumes in GNOME | 8
    GNOME uses GVFS to manage storage volumes. Mount via the Files app using URIs, and unmount via the Files app's unmount icon.
  43. [43]
    gvfs - GIO virtual file system - Ubuntu Manpage
    gvfs is a GIO virtual file system that provides access to files and storage using protocols like ftp, http, sftp, dav, and nfs.
  44. [44]
    Debian -- Details of package gvfs in sid
    gvfs is a userspace virtual filesystem where mounts run as separate processes which you talk to via D-Bus. It also contains a gio module.
  45. [45]
    gvfs - Fedora Packages
    Upstream: https://wiki.gnome.org/Projects/gvfs · License(s): LGPL-2.0-or-later AND GPL-3.0-only AND MPL-2.0 AND BSD-3-Clause-Sun · Maintainers: oholy, alexl ...
  46. [46]
    gvfs 1.58.0-2 (x86_64) - Arch Linux
    View the file list for gvfs. View the soname list for gvfs.
  47. [47]
    USN-3888-1: GVfs vulnerability | Ubuntu security notices
    Feb 12, 2019 · It was discovered that GVfs incorrectly handled certain inputs. An attacker could possibly use this issue to access sensitive information.