I’ve reviewed the app and the docs, and I’m impressed with your idea. It has motivated me to create something similar, but based on a more mature data model that the current SaaS corporations cannot compete with.
ActivityWatch’s APIs and interfaces are built around the concept of a 1:1 machine-to-user experience. They work quite well for that task but cannot be scaled up to higher layers of abstraction. The watchers are not treated as individual sensors collecting observations per se (although heartbeats come close to that notion) but as active elements that also produce guaranteed results. Abstractly, a sensor does not support events that have a beginning and an end; a sensor only declares a current, instantaneous state of observation. This is a real-world problem, given that today a person is likely to be using many devices (each with many sensors) concurrently. [And hopefully the 1970s concept of one-device-for-many-users is as dead as terminal screens.]
A model that competing SaaS infrastructures cannot support is one where observational data is collected across multiple devices and analyzed at a layer above the device. The “AFK” issue could be resolved by, say, a TensorFlow ANN that runs on one of the devices and concludes that the user stopped using computer A’s keyboard and started taking steps five seconds ago, and is therefore walking to a meeting. Modern p2p network protocols easily solve the synchronization issues between devices, and user-provided labeling would help the analysis process evolve over time.
Do you think the goal of abstracting away from the “event” model and towards an integrated “sensor” model might be in keeping with this project’s goals?
Would love to get the project that far. The issue of sensors not having a start and end is easily circumvented by simply omitting the duration when sending events. Abstracting away from categorization by hostname towards categorization by user is definitely something we’d be interested in doing in the long term, but as of now we don’t have any data that would make this useful.
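For concreteness, a minimal sketch (in Python) of what such a duration-less event payload could look like; the helper name and the sensor field are made up for illustration, and the endpoint path in the comment is the usual aw-server event route:

```python
from datetime import datetime, timezone

def sensor_event(data: dict) -> dict:
    # An instantaneous sensor reading: just a timestamp and the data,
    # with the duration field omitted entirely.
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "data": data,
    }

# A hypothetical step-counter reading, ready to POST to something like
#   /api/0/buckets/<bucket_id>/events
event = sensor_event({"steps_per_min": 92})
```

Consecutive readings of the same state could then be merged server-side, which is essentially what heartbeats already do.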
As a platform this makes me think of Zenobase. We’d love to extend ActivityWatch this far, because our ultimate goal is to log your life. ActivityWatch currently focuses on your digital life, but if we one day feel we have completed that task, we’d love to expand to log more devices and sensors. We don’t have many resources to develop ActivityWatch, though (two maintainers working in their spare time, plus some community-contributed editor watchers), so I don’t believe this will be a reality anytime soon.
I’m wondering about this: do you think it’s within the freedom of the Mozilla Public License 2.0, and more importantly the spirit of the project, for a lexical parser to be written on top of the source code that changes the API to another one? Duplicating the work that you’ve been doing seems rather wasteful, but perhaps some Python code that converts your code to another API might work. It would be complicated and might break a lot, but it might also save duplicated effort and could foster a collaborative mentality, the worst-case scenario being a hard, unmergeable fork at some point.
In any case, code compatibility via a very thin “sensor data update” API could help out your project as well.
Yes, that’s very close to my goal. However, Zenobase only supports REST with OAuth 2.0, as do the other services such as IFTTT. They work great for reports and asynchronous batch-mode analysis but can’t do much while the sensors are actually recording the data.
It would be nice to have direct access to raw sensor data in realtime, along with some form of analysis intelligence to handle realtime events.
My personal motivation for a sensor-analysis-intelligence-actuation system is rather simplistic, but still highly challenging technically. I’m a hard worker but tend to be easily distracted, so I feel I need a better time-management system. I’ve reviewed basically all known time-tracking apps and have extended them with custom planning systems that give me very granular control over my daily schedule. Integrating reports into a feedback loop is simple, as I can just adjust the min/max time for some task or goal (task series). These work great. The only problem is that I just ignore them.
What I would like to do is create an open-source sensor-analysis-intelligence-actuation system that can analyze all of a user’s sensor data in realtime and adjust the user’s daily plan accordingly. Congested commute this morning? Then your alarm goes off 5 minutes earlier. Feeling tired and yawning while stuck on a boring task? Your screen locks, and you have to take a 10-minute walk to get more energy. The possibilities are endless.
This, unfortunately, has to be an open-source project, since the SaaS companies have a vested interest in preventing any form of data sharing between themselves, especially when it comes to realtime data. While I’ve already done a lot of work on the intelligence and actuation layers, the sensor layer is going to be a lot of work unless cooperation between projects happens.
Note that I’m also coming up with some novel solutions for the analysis layer. The SaaS infrastructures require all data to be synchronized to a centralized server before analysis can take place. That’s 1970s-style design. Luckily, more modern peer-to-peer designs are winning out. I’m planning to look at how MPC (multi-party computation) in an ad-hoc network can help analysis happen faster, since independent devices could share the results of algorithms without having to share the raw sensor data, saving synchronization time and producing valuable results sooner.
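As a toy illustration of that idea (not taken from any existing implementation), additive secret sharing, one of the simplest MPC building blocks, lets a group of devices learn the sum of their readings without any device revealing its raw value:

```python
import random

Q = 2**31 - 1  # large modulus for additive secret sharing

def share(value: int, n_parties: int) -> list:
    # Split a private value into n additive shares mod Q.
    # No single share reveals anything about the value.
    shares = [random.randrange(Q) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % Q)
    return shares

def reconstruct(shares: list) -> int:
    return sum(shares) % Q

# Three devices each hold a private sensor total (e.g. minutes active).
private = [37, 52, 11]
# Each device splits its value into shares and exchanges them peer-to-peer.
all_shares = [share(v, 3) for v in private]
# Each device sums the shares it received (one from each peer)...
partials = [sum(col) % Q for col in zip(*all_shares)]
# ...and combining the partial sums yields the total, with no raw value leaked.
total = reconstruct(partials)  # == 37 + 52 + 11 == 100
```

Real protocols add authentication and dropout handling, but the principle is the same: ship derived results, not raw observations.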
I plan to look into your source code to see if it’s viable to have a lexical parser convert the watchers to a different, more slimmed-down API that I could leverage.
That’s fine; technically it could even be proprietary, as the MPL is not a strong copyleft license. The MPL allows the code to be reused in proprietary applications but requires that the MPL-licensed code itself (including modifications to it) remain open source. So, for example, including a proprietary library in ActivityWatch is legally doable, but we’d prefer to avoid that unless we felt it would be very valuable.
Well, the preferred way would obviously be to make a common API in aw-server that is flexible enough to support both use cases. What do you feel is restricting you in our base Event API? The heartbeat API is built upon it under the hood and is technically just an abstraction above it that makes some assumptions to make clients easier to implement. Similarly, we could have a sensor API in aw-server that makes some assumptions about sensor data and is based on the underlying event design. To me it seems, though, that the Event API should be pretty good for sensor data, but I could be wrong.
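To sketch what I mean (purely illustrative; the heartbeat endpoint and its pulsetime parameter are from the real aw-server API, but the bucket name and helper are made up): a “sensor data update” could be a thin mapping onto a heartbeat, where each reading is a zero-duration event and pulsetime controls how close two identical readings must be to get merged into one continuous event:

```python
from datetime import datetime, timezone

def sensor_update(bucket_id: str, reading: dict, pulsetime: float = 10.0) -> dict:
    # Express a "sensor data update" as a heartbeat request: a
    # zero-duration event; aw-server merges consecutive identical
    # readings that arrive within `pulsetime` seconds of each other.
    return {
        "endpoint": f"/api/0/buckets/{bucket_id}/heartbeat",
        "params": {"pulsetime": pulsetime},
        "body": {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "duration": 0,
            "data": reading,
        },
    }

# A hypothetical temperature sensor pushing a reading:
req = sensor_update("aw-sensor-temperature_office", {"celsius": 21.5})
```

The sensor client then stays trivial: it only reports its current state, and the event model falls out server-side.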
It would be pretty neat to have a layer that listens for new incoming events and sends a signal once a specific type of event arrives in a specific bucket, for example “user is no longer AFK” or a rain/snow sensor saying “it’s snowing outside”. This would mean we don’t have to poll, and we could even expose it over WebSockets so the information could be consumed by someone else, leaving aw-server out of the actual decision about what should happen when such an event occurs. By dispatching that notice to some other application, the application could in turn, for example, change the time of your alarm or tell your car to start warming up before work.
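A rough sketch of what that dispatch rule could look like (entirely hypothetical; nothing like this exists in aw-server today, and the bucket and signal names are invented):

```python
def match_subscriptions(bucket_id: str, event: dict, subscriptions: list) -> list:
    # Decide which subscriber signals an incoming event triggers.
    # A subscription names a bucket and a data predicate; the server
    # would only dispatch the notice, leaving the reaction (change an
    # alarm, preheat the car) to the subscribing application.
    fired = []
    for sub in subscriptions:
        if sub["bucket"] == bucket_id and all(
            event["data"].get(k) == v for k, v in sub["match"].items()
        ):
            fired.append(sub["signal"])
    return fired

subs = [
    {"bucket": "aw-watcher-afk_host", "match": {"status": "not-afk"},
     "signal": "user-returned"},
    {"bucket": "weather-sensor", "match": {"precipitation": "snow"},
     "signal": "snowing"},
]
fired = match_subscriptions("aw-watcher-afk_host",
                            {"data": {"status": "not-afk"}}, subs)
# fired == ["user-returned"]
```

Each fired signal would then be pushed out over a WebSocket to whichever application subscribed to it.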